Sample records for selectivity linearity accuracy

  1. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background: Support vector machines (SVMs) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either on the evaluation of different types of SVM or on voxel selection methods alone, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings: Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels for classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM significantly outperformed linear SVM, whereas in a relatively high-dimensional space, linear SVM performed better; (2) considering classification accuracy and computation time together, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance: The present work provides the first empirical comparison of linear and RBF SVM for classification of fMRI data in combination with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184

  2. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    Support vector machines (SVMs) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either on the evaluation of different types of SVM or on voxel selection methods alone, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels for classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM significantly outperformed linear SVM, whereas in a relatively high-dimensional space, linear SVM performed better; (2) considering classification accuracy and computation time together, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. The present work provides the first empirical comparison of linear and RBF SVM for classification of fMRI data in combination with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
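
    The comparison described in records 1 and 2 can be sketched compactly with scikit-learn. The snippet below is a hedged illustration, not the authors' pipeline: synthetic data stand in for the fMRI voxels, an ANOVA F-score filter stands in for the six voxel selection methods, and linear and RBF SVMs are compared at one low and one high feature-space dimension.

    ```python
    # Hedged sketch (not the authors' pipeline): linear vs. RBF SVM after
    # univariate "voxel" selection, at a low and a high feature dimension.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for fMRI data: 200 trials x 2000 voxels, 4 categories.
    X, y = make_classification(n_samples=200, n_features=2000, n_informative=50,
                               n_classes=4, random_state=0)

    for k in (20, 1000):                 # "low" vs. "high" dimensional feature space
        for kernel in ("linear", "rbf"):
            clf = make_pipeline(StandardScaler(),
                                SelectKBest(f_classif, k=k),
                                SVC(kernel=kernel))
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"k={k:4d}  kernel={kernel:6s}  accuracy={acc:.3f}")
    ```

    Keeping the selection step inside the pipeline confines it to each training fold, which avoids the optimistic bias that appears when voxels are selected on the full data set.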

  3. Bridging the gap between marker-assisted and genomic selection of heading time and plant height in hybrid wheat.

    PubMed

    Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C

    2014-06-01

    Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection.
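
    RR-BLUP, as used in this record, shrinks all marker effects toward zero with a common variance, which is algebraically a ridge regression on the marker matrix. The sketch below illustrates that equivalence on synthetic genotypes; the shrinkage value is arbitrary, and the paper's W-BLUP weighting of known functional markers is not reproduced.

    ```python
    # Hedged sketch of RR-BLUP as ridge regression on a marker matrix
    # (synthetic genotypes; the shrinkage value is arbitrary, and the
    # paper's W-BLUP weighting of known functional markers is omitted).
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 300, 1000                                   # lines x SNP markers
    Z = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotype codes
    beta_true = rng.normal(0, 0.05, size=p)
    y = Z @ beta_true + rng.normal(0, 1.0, size=n)     # simulated phenotype

    lam = 100.0                      # shrinkage ~ sigma_e^2 / sigma_marker^2
    # Mixed-model (ridge) solution: beta_hat = (Z'Z + lambda*I)^(-1) Z'y
    beta_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)
    gebv = Z @ beta_hat              # genomic estimated breeding values
    print("corr(GEBV, phenotype):", round(np.corrcoef(gebv, y)[0, 1], 3))
    ```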

  4. Bridging the gap between marker-assisted and genomic selection of heading time and plant height in hybrid wheat

    PubMed Central

    Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C

    2014-01-01

    Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection. PMID:24518889

  5. Genetic code translation displays a linear trade-off between efficiency and accuracy of tRNA selection.

    PubMed

    Johansson, Magnus; Zhang, Jingji; Ehrenberg, Måns

    2012-01-03

    Rapid and accurate translation of the genetic code into protein is fundamental to life. Yet due to lack of a suitable assay, little is known about the accuracy-determining parameters and their correlation with translational speed. Here, we develop such an assay, based on Mg(2+) concentration changes, to determine maximal accuracy limits for a complete set of single-mismatch codon-anticodon interactions. We found a simple, linear trade-off between efficiency of cognate codon reading and accuracy of tRNA selection. The maximal accuracy was highest for the second codon position and lowest for the third. The results rationalize the existence of proofreading in code reading and have implications for the understanding of tRNA modifications, as well as of translation error-modulating ribosomal mutations and antibiotics. Finally, the results bridge the gap between in vivo and in vitro translation and allow us to calibrate our test tube conditions to represent the environment inside the living cell.

  6. Linear and nonlinear variable selection in competing risks data.

    PubMed

    Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng

    2018-06-15

    The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade, but there is no existing work on the selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select linear and nonlinear covariates simultaneously and to estimate the selected covariate effects. We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and the adaptive LASSO to select each of the two components. Extensive numerical studies demonstrate that the proposed procedure achieves good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing causes of death. Copyright © 2018 John Wiley & Sons, Ltd.
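
    The adaptive LASSO used in the second stage penalizes each coefficient in proportion to the inverse of an initial estimate, which can be implemented by rescaling columns before an ordinary LASSO fit. Below is a hedged sketch of that reweighting trick on synthetic linear data; the paper's spectral decomposition into linear and nonlinear components and the subdistribution hazard likelihood are not reproduced.

    ```python
    # Hedged sketch of the adaptive-LASSO reweighting trick on synthetic
    # linear data (the paper's spectral decomposition and subdistribution
    # hazard likelihood are not reproduced here).
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(1)
    n, p = 200, 20
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.5, 1.0]                 # three true signals
    y = X @ beta + rng.normal(size=n)

    # Stage 1: initial estimate gives per-coefficient weights w_j = 1/|b_j|.
    b_init = LinearRegression().fit(X, y).coef_
    w = 1.0 / np.abs(b_init)

    # Stage 2: ordinary LASSO on rescaled columns X_j / w_j, then unscale.
    lasso = Lasso(alpha=0.1).fit(X / w, y)
    beta_hat = lasso.coef_ / w
    print("selected covariates:", np.flatnonzero(beta_hat != 0))
    ```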

  7. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    PubMed

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
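
    A representative member of the family of software linearization methods compared in this paper is a lookup table with linear interpolation between calibration points. The hedged sketch below uses Python for clarity and an invented calibration curve shaped like an optical distance sensor's inverse response; the paper's implementations run as integer arithmetic on low-cost microcontrollers.

    ```python
    # Hedged sketch of lookup-table linearization with linear interpolation
    # (Python for clarity; the paper's versions run as integer arithmetic
    # on low-cost microcontrollers).
    import numpy as np

    # Invented calibration table: raw ADC reading -> distance (cm), shaped
    # like the inverse response of an optical distance-measuring sensor.
    adc_points = np.array([80, 120, 200, 320, 520], dtype=float)
    dist_points = np.array([80.0, 50.0, 30.0, 18.0, 10.0])

    def linearize(adc):
        """Piecewise-linear interpolation between calibration points."""
        return np.interp(adc, adc_points, dist_points)

    print(linearize(260))   # reading between the 200 and 320 table entries
    ```

    The trade-off the paper quantifies is visible even here: more table entries cost memory but reduce interpolation error, while the per-sample work stays a small, constant number of operations.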

  8. Estimated breeding values for canine hip dysplasia radiographic traits in a cohort of Australian German Shepherd dogs.

    PubMed

    Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Thomson, Peter C

    2013-01-01

    Canine hip dysplasia (CHD) is a serious and common musculoskeletal disease of pedigree dogs and therefore represents both an important welfare concern and an imperative breeding priority. The typical heritability estimates for radiographic CHD traits suggest that the accuracy of breeding dog selection could be substantially improved by the use of estimated breeding values (EBVs) in place of selection based on phenotypes of individuals. The British Veterinary Association/Kennel Club scoring method is a complex measure composed of nine bilateral ordinal traits, intended to evaluate both early and late dysplastic changes. However, the ordinal nature of the traits may represent a technical challenge for calculation of EBVs using linear methods. The purpose of the current study was to calculate EBVs of British Veterinary Association/Kennel Club traits in the Australian population of German Shepherd Dogs, using linear (both as individual traits and a summed phenotype), binary and ordinal methods to determine the optimal method for EBV calculation. Ordinal EBVs correlated well with linear EBVs (r = 0.90-0.99) and somewhat well with EBVs for the sum of the individual traits (r = 0.58-0.92). Correlation of ordinal and binary EBVs varied widely (r = 0.24-0.99) depending on the trait and cut-point considered. The ordinal EBVs have increased accuracy (0.48-0.69) of selection compared with accuracies from individual phenotype-based selection (0.40-0.52). Despite the high correlations between linear and ordinal EBVs, the underlying relationship between EBVs calculated by the two methods was not always linear, leading us to suggest that ordinal models should be used wherever possible. As the population of German Shepherd Dogs which was studied was purportedly under selection for the traits studied, we examined the EBVs for evidence of a genetic trend in these traits and found substantial genetic improvement over time. This study suggests the use of ordinal EBVs could increase the rate of genetic improvement in this population.

  9. Accuracy of genomic selection in European maize elite breeding populations.

    PubMed

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross-validation. The study was based on experimental data from six segregating populations of a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross-validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible in maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.

  10. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    NASA Astrophysics Data System (ADS)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist for three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, we use finite difference schemes (Alternating Direction Implicit and Fourth-Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results show that the Fourth-Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
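
    As a hedged point of reference for the norms mentioned in the abstract, the sketch below advances a 3-D Fisher-type reaction-diffusion equation, u_t = D∇²u + u(1 − u), with a plain explicit scheme and reports the L2 and L∞ norms of the last update. It is not the paper's ADI or fourth-order Douglas implicit scheme, only a minimal baseline.

    ```python
    # Hedged baseline (not the paper's ADI or fourth-order Douglas schemes):
    # explicit time stepping for u_t = D*lap(u) + u(1 - u) in 3-D, reporting
    # the L2 and Linf norms used to compare schemes.
    import numpy as np

    N, D, dt = 32, 0.1, 1e-4
    dx = 1.0 / N
    u = np.random.default_rng(2).random((N, N, N)) * 0.1

    def laplacian(v):
        """7-point Laplacian with periodic boundaries."""
        return (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                np.roll(v, 1, 1) + np.roll(v, -1, 1) +
                np.roll(v, 1, 2) + np.roll(v, -1, 2) - 6 * v) / dx**2

    for _ in range(100):
        du = dt * (D * laplacian(u) + u * (1 - u))   # Fisher source term
        u += du

    print("L2 norm of last update:  ", np.sqrt((du**2).mean()))
    print("Linf norm of last update:", np.abs(du).max())
    ```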

  11. Liquid electrolyte informatics using an exhaustive search with linear regression.

    PubMed

    Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato

    2018-06-14

    Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, the properties of disordered liquid solutions have been less studied by data-driven informatics techniques. Here, we examined the estimation accuracy and efficiency of three such techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), using coordination energy and melting point as test liquid properties. We confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can expose the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to choose the balance of "accuracy" and "cost" when searching a huge number of new materials.
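
    ES-LiR amounts to fitting a linear regression to every subset of descriptors and ranking the subsets by prediction error. A hedged miniature with synthetic data and only 8 descriptors (255 subsets) is shown below; the real method also aggregates the results into a weight diagram of descriptors.

    ```python
    # Hedged miniature of exhaustive search with linear regression (ES-LiR):
    # fit every descriptor subset and rank by cross-validated error
    # (synthetic data; 8 descriptors -> 255 subsets, so the search is cheap).
    from itertools import combinations

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n, p = 60, 8
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.5, size=n)

    results = []
    for k in range(1, p + 1):
        for subset in combinations(range(p), k):
            mse = -cross_val_score(LinearRegression(), X[:, list(subset)], y,
                                   scoring="neg_mean_squared_error", cv=5).mean()
            results.append((mse, subset))

    results.sort()
    print("three best subsets:", results[:3])   # descriptors 0 and 3 should appear
    ```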

  12. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    PubMed

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve classification accuracy with small amounts of motor imagery training data in the development of brain-computer interface (BCI) systems, we propose an analysis method that automatically selects characteristic parameters based on correlation coefficient analysis. On the five sample data sets of dataset IVa from the 2005 BCI Competition, we used the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram, then applied feature extraction based on common spatial patterns (CSP) and classification by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy was higher with the correlation coefficient feature selection method than without it. Compared with a support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis selects better parameters and improves classification accuracy.
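
    The channel-reduction idea can be sketched generically: rank channels by the absolute correlation of a feature with the class label, keep the top few, and classify with LDA. The hedged snippet below uses synthetic band-power features and omits the paper's STFT and CSP stages.

    ```python
    # Hedged sketch: correlation-based channel selection followed by LDA
    # (synthetic band-power features; the paper's STFT and CSP steps are
    # omitted, and in practice the selection should be nested inside CV).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_trials, n_channels = 120, 32
    y = rng.integers(0, 2, size=n_trials)
    X = rng.normal(size=(n_trials, n_channels))
    X[:, :5] += y[:, None] * 0.8        # five channels carry class information

    # Rank channels by |corr(feature, label)| and keep the top eight.
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_channels)])
    top = np.argsort(corrs)[::-1][:8]

    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, top], y, cv=5).mean()
    print("channels kept:", sorted(top.tolist()), "accuracy:", round(acc, 3))
    ```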

  13. Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.

    PubMed

    Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon

    2014-10-01

    In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees (ARTBLA). One-way ANOVA found significant differences (P<.05) among all systems for Iy, Ry, Lx, Ly, and twisting. Generally, vertical linear displacements were less likely to reach the threshold of clinical detectability compared with anteroposterior or mediolateral linear displacements. The overall repositioning accuracy of DENSCR was comparable with 4 magnetic mounting systems (DENMAG, ARTBLU, ARTWHI, ARTBLA). DENCON exhibited the worst repositioning accuracy for Iy, Ry, Lx, Ly, and twisting. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  14. Comparison of discriminant analysis methods: Application to occupational exposure to particulate matter

    NASA Astrophysics Data System (ADS)

    Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra

    2016-06-01

    Health effects associated with occupational exposure to particulate matter have been studied by several authors. In this study, six industries from five different areas were selected: cork company 1, cork company 2, poultry, a slaughterhouse for cattle, a riding arena, and production of animal feed. The measurement tool was a portable direct-reading device. This tool provides the particle number concentration for six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. The focus is on these features because they might be more closely related to adverse health effects. The aim is to identify the particle sizes that best discriminate among the industries, with the ultimate goal of classifying industries regarding potential negative effects on workers' health. Several discriminant analysis methods were applied to the occupational exposure data and compared with respect to classification accuracy. The selected methods were linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); robust linear discriminant analysis with selected estimators (MLE (maximum likelihood estimators), MVE (minimum volume ellipsoid), "t", MCD (minimum covariance determinant), MCD-A, MCD-B); multinomial logistic regression; and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest classification accuracy on the data set under study. Results indicate that the particle number concentration at the 0.5 µm diameter is the parameter that best discriminates among the industries.
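
    A hedged skeleton of such a comparison is easy to set up in scikit-learn with synthetic stand-in data for the six particle-count features; the robust LDA variants (MVE, MCD, etc.) evaluated in the paper have no direct scikit-learn equivalent and are omitted here.

    ```python
    # Hedged skeleton of the classifier comparison (synthetic stand-in for
    # the six particle-count features; the paper's robust LDA variants have
    # no direct scikit-learn equivalent and are omitted).
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
        "multinomial LR": LogisticRegression(max_iter=1000),
        "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        print(f"{name:14s} CV accuracy = {cross_val_score(model, X, y, cv=5).mean():.3f}")
    ```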

  15. The linear interplay of intrinsic and extrinsic noises ensures a high accuracy of cell fate selection in budding yeast

    PubMed Central

    Li, Yongkai; Yi, Ming; Zou, Xiufen

    2014-01-01

    To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292

  16. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses kernel tricks to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters that affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms systematically find optimal kernel parameters for SVM, instead of relying on randomly selected kernel parameters. The best accuracy improved over the baselines of linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22% and sigmoid: 78.70%. However, for larger data sets this method is not practical because it takes a lot of time.
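
    The search loop can be illustrated with a minimal genetic algorithm over log-scaled (C, gamma) pairs for an RBF SVM. This is a hedged toy on a built-in scikit-learn dataset, not the paper's Australian Credit Approval experiment.

    ```python
    # Hedged toy genetic algorithm over (log10 C, log10 gamma) for an RBF
    # SVM, on a built-in dataset (not the paper's Australian credit data).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)
    lo, hi = np.array([-2.0, -6.0]), np.array([4.0, 0.0])   # search box

    def fitness(g):                       # g = (log10 C, log10 gamma)
        clf = SVC(C=10 ** g[0], gamma=10 ** g[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pop = rng.uniform(lo, hi, size=(10, 2))          # initial population
    for _ in range(5):                               # generations
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-4:]]       # selection: keep the best 4
        children = parents[rng.integers(0, 4, 10)] + rng.normal(0, 0.3, (10, 2))
        children[:2] = parents[-2:]                  # elitism: carry over the best 2
        pop = np.clip(children, lo, hi)

    best = pop[np.argmax([fitness(g) for g in pop])]
    print("best C, gamma:", 10 ** best[0], 10 ** best[1])
    ```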

  17. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decisions sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial-based model to predict cell fate. The model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Polynomials of different degrees were then used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resulting cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, for both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree-1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, although the accuracy of the linear polynomial that uses the correlation analysis outcomes is a little higher (86.62%), the one built on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as for clinical studies of cell development related diseases.
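
    The degree-selection step generalizes beyond this data set: fit polynomial feature expansions of increasing degree in a pipeline and compare them by 10-fold cross-validation. A hedged sketch on synthetic data (not the pancreatic-cell expression set) follows.

    ```python
    # Hedged sketch: pick the polynomial degree by 10-fold cross-validation
    # (synthetic data with a truly linear response, not the pancreatic set).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 4))                         # four selected "genes"
    y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(0, 0.2, 200)

    for degree in (1, 2, 3):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        r2 = cross_val_score(model, X, y, cv=10).mean()
        print(f"degree {degree}: mean CV R^2 = {r2:.3f}")  # degree 1 should hold up best
    ```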

  18. Development and Validation of High Performance Liquid Chromatography Method for Determination Atorvastatin in Tablet

    NASA Astrophysics Data System (ADS)

    Yugatama, A.; Rohmani, S.; Dewangga, A.

    2018-03-01

    Atorvastatin is the primary choice for dyslipidemia treatment. Because the atorvastatin patent has expired, the pharmaceutical industry produces generic copies of the drug; methods for tablet quality testing, including determination of the atorvastatin content of tablets, therefore need to be developed. The purpose of this research was to develop and validate a simple analytical method for atorvastatin tablets by HPLC. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 x 4.6 mm, 5 µm) as the reversed-phase stationary phase, a mixture of methanol-water at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at a wavelength of 245 nm. The validation parameters included selectivity, linearity, accuracy, precision, limit of detection (LOD) and limit of quantitation (LOQ). The results indicate that the developed method showed good selectivity, linearity, accuracy, precision, LOD and LOQ for the analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, and the linearity range was 20 - 120 ng/mL.
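
    The linearity, LOD and LOQ figures follow the standard ICH-style recipe: fit the calibration line, then take LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope. A hedged numeric sketch with invented peak areas:

    ```python
    # Hedged sketch of calibration linearity and ICH-style LOD/LOQ estimation
    # (invented peak areas, not the paper's measured atorvastatin data).
    import numpy as np

    conc = np.array([20, 40, 60, 80, 100, 120], dtype=float)    # ng/mL
    area = np.array([410, 805, 1220, 1590, 2010, 2405], float)  # invented responses

    slope, intercept = np.polyfit(conc, area, 1)
    resid = area - (slope * conc + intercept)
    sigma = resid.std(ddof=2)            # residual SD with n-2 degrees of freedom

    lod = 3.3 * sigma / slope            # ICH: LOD = 3.3*sigma/S
    loq = 10.0 * sigma / slope           # ICH: LOQ = 10*sigma/S
    r = np.corrcoef(conc, area)[0, 1]
    print(f"r = {r:.4f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
    ```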

  19. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. It typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than that of the traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
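
    The first decision in the scheme, whether weighting is needed, compares replicate variances at the two ends of the calibration range with an F-test. Below is a hedged sketch of just that step, with invented replicate responses; the paper's full RStudio script also covers the weighting choice, the partial F-test for model order, and residual normality testing.

    ```python
    # Hedged sketch of the heteroscedasticity F-test that opens the scheme:
    # compare replicate variances at the two ends of the calibration range
    # (invented replicate responses, not the paper's LC-MS-MS data).
    import numpy as np
    from scipy import stats

    lloq = np.array([0.98, 1.05, 1.02, 0.95, 1.01])     # replicates, low end
    uloq = np.array([98.0, 104.5, 101.2, 95.3, 99.9])   # replicates, high end

    F = uloq.var(ddof=1) / lloq.var(ddof=1)
    p = stats.f.sf(F, len(uloq) - 1, len(lloq) - 1)     # upper-tail p-value
    print(f"F = {F:.1f}, p = {p:.4f}")
    if p < 0.05:
        print("variance grows with concentration -> try 1/x or 1/x^2 weighting")
    else:
        print("homoscedastic -> an unweighted fit is adequate")
    ```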

  20. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    PubMed Central

    Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-01-01

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP; GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (the HEL and USP data sets), comprising different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For the traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved for GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415

  21. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    PubMed

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP; GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (the HEL and USP data sets), comprising different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For the traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved for GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.
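
    The two kernel methods compared in records 20 and 21 can be written down compactly: GBLUP uses the linear kernel K = XX′/p on centered genotypes, while GK uses a Gaussian kernel of squared marker distances. Below is a hedged construction on synthetic genotypes, with the bandwidth set by a median-distance heuristic rather than the paper's tuning.

    ```python
    # Hedged construction of the two kernels compared above: the linear
    # GBLUP kernel vs. a Gaussian kernel (synthetic genotypes; bandwidth
    # from a median-distance heuristic rather than the paper's tuning).
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(6)
    n, p = 100, 500
    X = rng.integers(0, 3, size=(n, p)).astype(float)   # hybrid genotype codes
    X -= X.mean(axis=0)                                 # center each marker

    K_gblup = X @ X.T / p                               # linear kernel (GB)

    D2 = squareform(pdist(X, "sqeuclidean"))            # squared marker distances
    h = 1.0 / np.median(D2[np.triu_indices(n, 1)])      # bandwidth heuristic
    K_gauss = np.exp(-h * D2)                           # Gaussian kernel (GK)

    print(K_gblup.shape, K_gauss.shape)                 # both n x n
    ```

    Either kernel then enters the same mixed model; the nonlinear GK can capture similarity patterns that the linear GB kernel misses, which is one reading of the accuracy gains reported above.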

  22. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic equations and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  23. Linear reduction methods for tag SNP selection.

    PubMed

    He, Jingwu; Zelikovsky, Alex

    2004-01-01

    It is widely hoped that constructing a complete human haplotype map will help to associate complex diseases with certain SNPs. Unfortunately, the number of SNPs is huge and it is very costly to sequence many individuals. It is therefore desirable to reduce the number of SNPs that must be sequenced to a considerably smaller number of informative representatives, so-called tag SNPs. In this paper, we propose a new linear-algebra-based method for selecting and using tag SNPs. Our method is purely combinatorial and can be combined with linkage disequilibrium (LD)- and block-based methods. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs linearly predicted from linearly chosen tag SNPs, and we obtain extremely good compression and prediction rates. For example, for long haplotypes (>25,000 SNPs), knowing only 0.4% of all SNPs we predict the entire unknown haplotype with 2% error, with the prediction method based on a 10% sample of the population.
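
    The paper's quality measure, predicting the untyped SNPs as linear combinations of the tags and counting mismatches, can be sketched with ordinary least squares on a synthetic haplotype matrix. In this hedged toy the tags are chosen at random rather than by the paper's linear-algebra criterion, so the prediction rate is only illustrative.

    ```python
    # Hedged sketch of linear prediction of untyped SNPs from tag SNPs
    # (synthetic 0/1 haplotypes; tags picked at random here, not by the
    # paper's linear-algebra criterion, so the rate is only illustrative).
    import numpy as np

    rng = np.random.default_rng(7)
    n_train, n_test, n_snps, n_tags = 80, 20, 200, 20
    H = (rng.random((n_train + n_test, n_snps))
         + 0.5 * rng.random((n_train + n_test, 1))).round().clip(0, 1)

    tags = rng.choice(n_snps, n_tags, replace=False)
    others = np.setdiff1d(np.arange(n_snps), tags)
    train, test = H[:n_train], H[n_train:]

    # Least-squares map from tag SNPs to all remaining SNPs, fit on the sample.
    W, *_ = np.linalg.lstsq(train[:, tags], train[:, others], rcond=None)
    pred = (test[:, tags] @ W).round().clip(0, 1)
    print("prediction accuracy on untyped SNPs:",
          round((pred == test[:, others]).mean(), 3))
    ```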

  24. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  25. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
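
    The exhaustive-search step described in records 24 and 25 can be illustrated in miniature for a linear Gaussian measurement model y = Hx + v: for each candidate sensor subset, the posterior error covariance of the health-parameter estimate is (Hᵀ R⁻¹ H + P₀⁻¹)⁻¹, and the selection metric is its trace. The model below is invented (4 sensors, 2 health parameters), not the engine model of the papers.

    ```python
    # Hedged miniature of sensor-suite selection by exhaustive search:
    # minimize the trace of the posterior error covariance of a linear
    # estimator (invented 4-sensor, 2-health-parameter model, not the
    # engine model used in the paper).
    from itertools import combinations

    import numpy as np

    H = np.array([[1.0, 0.2],     # each row: one sensor's sensitivity to
                  [0.3, 1.1],     # the two health parameters
                  [0.8, 0.8],
                  [0.1, 0.4]])
    R = np.diag([0.5, 1.0, 0.8, 0.2])   # sensor noise variances
    P0 = np.eye(2)                      # prior covariance of health parameters

    best_trace, best_idx = np.inf, None
    for k in (2, 3):                    # candidate suite sizes
        for idx in combinations(range(4), k):
            Hs = H[list(idx)]
            Rs = R[np.ix_(list(idx), list(idx))]
            P = np.linalg.inv(Hs.T @ np.linalg.inv(Rs) @ Hs + np.linalg.inv(P0))
            if np.trace(P) < best_trace:
                best_trace, best_idx = np.trace(P), idx

    print("best suite:", best_idx, "trace(P) =", round(best_trace, 4))
    ```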

  26. Magnetoencephalogram blind source separation and component selection procedure to improve the diagnosis of Alzheimer's disease patients.

    PubMed

    Escudero, Javier; Hornero, Roberto; Abásolo, Daniel; Fernández, Alberto; Poza, Jesús

    2007-01-01

    The aim of this study was to improve the diagnosis of Alzheimer's disease (AD) patients applying a blind source separation (BSS) and component selection procedure to their magnetoencephalogram (MEG) recordings. MEGs from 18 AD patients and 18 control subjects were decomposed with the algorithm for multiple unknown signals extraction. MEG channels and components were characterized by their mean frequency, spectral entropy, approximate entropy, and Lempel-Ziv complexity. Using Student's t-test, the components which accounted for the most significant differences between groups were selected. Then, these relevant components were used to partially reconstruct the MEG channels. By means of a linear discriminant analysis, we found that the BSS-preprocessed MEGs classified the subjects with an accuracy of 80.6%, whereas 72.2% accuracy was obtained without the BSS and component selection procedure.

  27. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the pressing topic of ensuring the quality of technical products when tolerancing component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify the linear coordinating sizes that determine the location of part elements as linear sizes and, second, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements and analytical and experimental methods are used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change in their basic deviation: the basic deviation becomes the maximum deviation corresponding to the material limit of the element, EI being the lower tolerance deviation for the sizes of internal elements (holes) and es the upper tolerance deviation for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of shafts and holes and that determine the type of fit.

  28. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    NASA Astrophysics Data System (ADS)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the credit assessment process. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was tested to evaluate its performance in predicting customers' credit status. The algorithm is adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets from the UCI machine learning database, Australian and German, were selected to demonstrate the performance of the new classifier. The analytical results indicate that the improved LOLIMOT significantly increases prediction accuracy.

  29. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.

    PubMed

    Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon

    2015-05-01

    The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to make explicit their contribution to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All acquired data were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance-ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes of impedance ratios across frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was thus located within a linear interval. Therefore, using the ratio between electrical impedance measurements at different frequencies is a robust method for detection of the APC.
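
    The linear ramp model used here, two horizontal plateaus joined by a linear segment, can be fitted with scipy's curve_fit. A hedged sketch on invented impedance-ratio versus file-depth data:

    ```python
    # Hedged sketch: fit a linear ramp (plateau-slope-plateau) to invented
    # impedance-ratio vs. file-depth data, as in the EAL ratio analysis.
    import numpy as np
    from scipy.optimize import curve_fit

    def ramp(x, left, right, x0, x1):
        """Constant at `left` before x0, linear to `right` at x1, then flat."""
        return np.where(x < x0, left,
                        np.where(x > x1, right,
                                 left + (right - left) * (x - x0) / (x1 - x0)))

    depth = np.linspace(0, 5, 40)       # file depth (mm), invented
    rng = np.random.default_rng(8)
    ratio = ramp(depth, 1.8, 0.9, 1.5, 3.5) + rng.normal(0, 0.03, 40)

    popt, _ = curve_fit(ramp, depth, ratio, p0=[1.5, 1.0, 1.0, 4.0])
    print("plateaus %.2f -> %.2f, linear zone %.2f to %.2f mm" % tuple(popt))
    ```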

  30. Comparing strategies for selection of low-density SNPs for imputation-mediated genomic prediction in U. S. Holsteins.

    PubMed

    He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L

    2018-04-01

    SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.

  31. The accuracy of Genomic Selection in Norwegian Red cattle assessed by cross-validation.

    PubMed

    Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E

    2009-11-01

    Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, the accuracy of the prediction of genome-wide breeding values (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by cross-validation. The accuracies of GW-EBV prediction were found to vary widely, between 0.12 and 0.62. G-BLUP gave the highest accuracy overall. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and lower bias than for health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.

  32. Comparison of Classification Methods for P300 Brain-Computer Interface on Disabled Subjects

    PubMed Central

    Manyakov, Nikolay V.; Chumerin, Nikolay; Combaz, Adrien; Van Hulle, Marc M.

    2011-01-01

    We report on tests with a mind typing paradigm based on a P300 brain-computer interface (BCI) on a group of amyotrophic lateral sclerosis (ALS), middle cerebral artery (MCA) stroke, and subarachnoid hemorrhage (SAH) patients, suffering from motor and speech disabilities. We investigate the achieved typing accuracy given the individual patient's disorder, and how it correlates with the type of classifier used. We considered 7 types of classifiers, linear as well as nonlinear ones, and found that, overall, one type of linear classifier yielded a higher classification accuracy. In addition to the selection of the classifier, we also suggest and discuss a number of recommendations to be considered when building a P300-based typing system for disabled subjects. PMID:21941530

  33. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  34. Cannabis with high δ9-THC contents affects perception and visual selective attention acutely: an event-related potential study.

    PubMed

    Böcker, K B E; Gerritsen, J; Hunault, C C; Kruidenier, M; Mensinga, Tj T; Kenemans, J L

    2010-07-01

    Cannabis intake has been reported to affect cognitive functions such as selective attention. This study addressed the effects of exposure to cannabis with up to 69.4mg Delta(9)-tetrahydrocannabinol (THC) on Event-Related Potentials (ERPs) recorded during a visual selective attention task. Twenty-four participants smoked cannabis cigarettes with four doses of THC on four test days in a randomized, double blind, placebo-controlled, crossover study. Two hours after THC exposure the participants performed a visual selective attention task and concomitant ERPs were recorded. Accuracy decreased linearly and reaction times increased linearly with THC dose. However, performance measures and most of the ERP components related specifically to selective attention did not show significant dose effects. Only in relatively light cannabis users the Occipital Selection Negativity decreased linearly with dose. Furthermore, ERP components reflecting perceptual processing, as well as the P300 component, decreased in amplitude after THC exposure. Only the former effect showed a linear dose-response relation. The decrements in performance and ERP amplitudes induced by exposure to cannabis with high THC content resulted from a non-selective decrease in attentional or processing resources. Performance requiring attentional resources, such as vehicle control, may be compromised several hours after smoking cannabis cigarettes containing high doses of THC, as presently available in Europe and Northern America. Copyright 2010 Elsevier Inc. All rights reserved.

  35. EEG-based mild depressive detection using feature selection methods and classifiers.

    PubMed

    Li, Xiaowei; Hu, Bin; Sun, Shuting; Cai, Hanshu

    2016-11-01

    Depression has become a major health burden worldwide, and effective detection of this disorder is a great challenge that requires the latest technological tools, such as electroencephalography (EEG). This EEG-based research seeks to find the frequency band and brain regions most related to mild depression, as well as an optimal combination of classification algorithms and feature selection methods for future mild depression detection. An experiment based on a facial expression viewing task (Emo_block and Neu_block) was conducted, and EEG data of 37 university students were collected using a 128-channel HydroCel Geodesic Sensor Net (HCGSN). For discriminating mild depressive patients from normal controls, BayesNet (BN), Support Vector Machine (SVM), Logistic Regression (LR), k-nearest neighbor (KNN) and RandomForest (RF) classifiers were used, and BestFirst (BF), GreedyStepwise (GSW), GeneticSearch (GS), LinearForwardSelection (LFS) and RankSearch (RS), based on Correlation-based Feature Selection (CFS), were applied for linear and non-linear EEG feature selection. An independent-samples t-test with Bonferroni correction was used to find the significantly discriminant electrodes and features. Data mining results indicate that optimal performance is achieved using a combination of the feature selection method GSW based on CFS and the classifier KNN for the beta frequency band. Accuracies reached 92.00% and 98.00%, and AUC reached 0.957 and 0.997, for the Emo_block and Neu_block beta-band data respectively. The t-test results validate the effectiveness of the features selected by the search method GSW. A simplified EEG system with only the FP1, FP2, F3, O2 and T3 electrodes was also explored with linear features, yielding accuracies of 91.70% and 96.00% and AUC of 0.952 and 0.972 for Emo_block and Neu_block respectively. The classification results obtained by GSW + KNN are encouraging and better than previously published results. In the spatial distribution of features, we find that the left parietotemporal lobe in the beta EEG frequency band has a greater effect on mild depression detection, and that fewer EEG channels (FP1, FP2, F3, O2 and T3) combined with linear features may be good candidates for use in portable systems for mild depression detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Development and Preliminary Testing of a High Precision Long Stroke Slit Change Mechanism for the SPICE Instrument

    NASA Technical Reports Server (NTRS)

    Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc

    2014-01-01

    In the frame of ESA's Solar Orbiter scientific mission, Almatech has been selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. To guarantee the required optical cleanliness while fulfilling stringent accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement has been selected. The four different slits to be used for the SPICE instrument result in a total stroke of 16.5 mm in this linear slit-changer arrangement. The combination of long stroke and high-precision positioning requirements was identified as the main design challenge to be validated through breadboard-model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results demonstrated the full adequacy of the flexible blade guiding system in a stand-alone configuration. Further breadboard tests, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, showed significant improvements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including a detailed tolerance analysis, has shown the suitability of this satellite-roller-screw-based mechanism for actuating the tested flexible guiding system and compliant connection. The presented development and preliminary testing of the high-precision, long-stroke Slit Change Mechanism are considered fully successful, such that future tests of the full Slit Change Mechanism can be performed, with the gained confidence, directly on a qualification model. The selected linear SCM design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other applications where contamination and high-precision positioning are dominant design drivers.

  17. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to get satisfactory results by using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM-RFE) is an effective algorithm in which gene selection and cancer classification are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm when implemented with Gaussian-kernel SVMs: a genetic algorithm searches for the pair of optimal parameters, as a better alternative to the common practice of picking the apparently best parameters by hand. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
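
    A minimal sketch of the idea, under the assumption of a simple elitist GA (the paper's exact operators and datasets are not reproduced here): individuals encode the (C, gamma) pair of an RBF-kernel SVM, and cross-validated accuracy serves as the fitness. Population size, mutation scale and generation count are illustrative choices.

```python
# Tiny elitist genetic algorithm searching (C, gamma) of an RBF-kernel SVM.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(log_c, log_gamma):
    clf = SVC(kernel="rbf", C=10.0**log_c, gamma=10.0**log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

# Individuals are (log10 C, log10 gamma) pairs sampled in a broad box
pop = rng.uniform([-2, -6], [3, 0], size=(12, 2))
for gen in range(10):
    scores = np.array([fitness(*ind) for ind in pop])
    elite = pop[np.argsort(scores)[-4:]]             # keep the 4 best
    children = elite[rng.integers(0, 4, size=8)] \
               + rng.normal(scale=0.3, size=(8, 2))  # mutate copies of elites
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("best log10(C), log10(gamma):", best)
```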

  18. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    Selecting an optimal number and distribution of ground control information is important for reaching accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) that is applicable to all kinds of features (point, line, and areal features); linear features, owing to their unique characteristics, are the focus of this investigation. The method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. A one indicates that a pair of conjugate lines is used as a ground control line (GCL); a zero indicates that it is not. The chromosome length equals the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model are calculated using the selected GCLs (the ones in that chromosome). A limited number of check points (CPs) are then used to evaluate the root mean square error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and positions of the ones in the best chromosome indicate the GCLs selected from among all conjugate lines. To evaluate the proposed method, GeoEye and Ikonos images over different areas of Iran are used. Compared with conventional methods that use all conjugate lines as GCLs, the proposed method in a traditional rational function model (RFM) shows a fivefold accuracy improvement (to pixel-level accuracy), demonstrating its strength. To prevent over-parameterization of a traditional RFM due to the selection of a large number of improper, correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of combining the proposed OWDIS method with an optimized line-based RFM, increasing the accuracy to better than 0.7 pixel, improving reliability, and reducing systematic errors. These results also demonstrate the high potential of linear features as reliable control features for reaching sub-pixel registration accuracy.

  19. Predictor Combination in Binary Decision-Making Situations

    ERIC Educational Resources Information Center

    McGrath, Robert E.

    2008-01-01

    Professional psychologists are often confronted with the task of making binary decisions about individuals, such as predictions about future behavior or employee selection. Test users familiar with linear models and Bayes's theorem are likely to assume that the accuracy of decisions is consistently improved by combination of outcomes across valid…

  20. Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils

    PubMed Central

    Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng

    2017-01-01

    This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural networks (ANN), and radial basis function- and linear function-based support vector machines (RBF-SVM and LF-SVM), were employed for establishing diagnosis models. The model accuracies were evaluated and compared using overall accuracies (OAs). The statistical significance of the difference between models was evaluated by using McNemar's test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods improved the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF-SVM (OA = 89%) and LF-SVM (OA = 87%) showed no statistically significant differences in diagnosis accuracy (Z < 1.96, p > 0.05). These results indicated that it is feasible to diagnose soil arsenic contamination using reflectance spectroscopy and that an appropriate combination of multivariate methods is important for improving diagnosis accuracies. PMID:28471412
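
    For reference, McNemar's test as used above compares two classifiers on the same samples via the counts of discordant predictions; a hedged sketch (with made-up predictions, not the study's data) follows.

```python
# McNemar's test for paired classifier comparison. b and c count the samples
# that exactly one of the two models classifies correctly.
import numpy as np

def mcnemar_z(y_true, pred_a, pred_b):
    ok_a = (pred_a == y_true)
    ok_b = (pred_b == y_true)
    b = np.sum(ok_a & ~ok_b)   # A right, B wrong
    c = np.sum(~ok_a & ok_b)   # A wrong, B right
    # Continuity-corrected normal approximation (requires b + c > 0);
    # |Z| < 1.96 means no significant difference at the 5% level.
    return (abs(b - c) - 1) / np.sqrt(b + c)

y     = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])
p_rf  = np.array([0, 1, 1, 0, 1, 1, 1, 1, 0, 0])   # hypothetical RF output
p_svm = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1, 0])   # hypothetical SVM output
print("Z =", mcnemar_z(y, p_rf, p_svm))
```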

  1. Online detecting system of roller wear based on laser-linear array CCD technology

    NASA Astrophysics Data System (ADS)

    Guo, Yuan

    2010-10-01

    The roller is an important metallurgical tool in the rolling mill, and the surface of a roller directly affects the quality of the rolled product. After a period of use, a roller must be reground or replaced. Examining the profile of a working roller between rolling intervals is called online detection of roller wear. Online detection of roller wear is important for scheduling regrinding rationally, reducing the number of roller changes, improving product quality and enabling online roller grinding. By applying laser-linear array CCD detection technology, a non-contact method for online detection of roller wear is put forward. The principle, composition and operation process of the linear array CCD detection system are described, and an error-compensation algorithm is calculated to offset the shift of the roller axis in this measurement system, improving stability and accuracy remarkably. Experiments show that the accuracy of the detection system meets the demands of the practical production process. It provides a new high-speed, high-accuracy method for the online detection of roller wear.

  2. Optimal subset selection of primary sequence features using the genetic algorithm for thermophilic proteins identification.

    PubMed

    Wang, LiQiang; Li, CuiFeng

    2014-10-01

    A genetic algorithm (GA) coupled with multiple linear regression (MLR) was used to extract useful features from amino acids and g-gap dipeptides for distinguishing between thermophilic and non-thermophilic proteins. The method was trained on a benchmark dataset of 915 thermophilic and 793 non-thermophilic proteins. It reached an overall accuracy of 95.4% in a jackknife test using nine amino acids, 38 0-gap dipeptides and 29 1-gap dipeptides. The accuracy as a function of protein size ranged between 85.8 and 96.9%. The overall accuracies of three independent tests were 93, 93.4 and 91.8%. These results suggest that the GA-MLR approach described herein is a powerful method for selecting features that describe thermostability and an aid in the design of more stable proteins.

  3. Comparison of Models and Whole-Genome Profiling Approaches for Genomic-Enabled Prediction of Septoria Tritici Blotch, Stagonospora Nodorum Blotch, and Tan Spot Resistance in Wheat.

    PubMed

    Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E

    2017-07-01

    The leaf spotting diseases of wheat, which include Septoria tritici blotch (STB) caused by Zymoseptoria tritici, Stagonospora nodorum blotch (SNB) caused by Parastagonospora nodorum, and tan spot (TS) caused by Pyrenophora tritici-repentis, pose challenges to breeding programs in selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection, which uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of the least-squares (LS) approach with genomic-enabled prediction models including genomic best linear unbiased prediction (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces with markers (RKHS-M), as well as a pedigree-based model (RKHS-P) and RKHS with markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP the highest; the genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches, genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq), for prediction. While GBS markers performed slightly better than DArTseq markers, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.

  4. Depth Of Modulation And Spot Size Selection In Bar-Code Laser Scanners

    NASA Astrophysics Data System (ADS)

    Barkan, Eric; Swartz, Jerome

    1982-04-01

    Many optical and electronic considerations enter into the selection of optical spot size in flying-spot laser scanners of the type used in modern industrial and commercial environments. These include the scale of the symbols to be read, optical background noise present in the symbol substrate, and factors relating to the characteristics of the signal processor. Many 'front ends' consist of a linear signal conditioner followed by nonlinear conditioning and digitizing circuitry. Although the nonlinear portions of the circuit can be difficult to characterize mathematically, it is frequently possible to give at least a minimum depth-of-modulation measure that yields a worst-case guarantee of adequate digitization accuracy. The depth of modulation actually delivered to the nonlinear circuitry depends on the scale, contrast, and noise content of the scanned symbol, as well as on the characteristics of the linear conditioning circuitry (e.g., transfer function and electronic noise). Time- and frequency-domain techniques are applied to estimate the effects of these factors when selecting a spot size for a given system environment. Results include estimates of the effects of the linear front-end transfer function on effective spot size, and of asymmetries that can affect digitization accuracy. Plots of convolution-computed modulation patterns and other important system properties are presented. Considerations are limited primarily to Gaussian spot profiles but also apply to more general cases. Attention is paid to realistic symbol models and to implications with respect to printing tolerances.

  5. Post-mortem prediction of primal and selected retail cut weights of New Zealand lamb from carcass and animal characteristics.

    PubMed

    Ngo, L; Ho, H; Hunter, P; Quinn, K; Thomson, A; Pearson, G

    2016-02-01

    Post-mortem measurements (cold weight, grade and external carcass linear dimensions) as well as live animal data (age, breed, sex) were used to predict ovine primal and retail cut weights for 792 lamb carcases. Significant levels of variance could be explained using these predictors. The predictive power of these measurements on primal and retail cut weights was studied using the results of principal component analysis and the absolute values of the t-statistics of the linear regression model. High prediction accuracy was achieved for primal cut weights (adjusted R² up to 0.95), as well as moderate accuracy for key retail cut weights: tenderloin (adj-R² = 0.60), loin (adj-R² = 0.62), French rack (adj-R² = 0.76) and rump (adj-R² = 0.75). The carcass cold weight had the best predictive power, with accuracy increasing by around 10% after including the next three most significant variables. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Realization and performance of cryogenic selection mechanisms

    NASA Astrophysics Data System (ADS)

    Aitink-Kroes, Gabby; Bettonvil, Felix; Kragt, Jan; Elswijk, Eddy; Tromp, Niels

    2014-07-01

    Within large-wavelength-bandwidth infrared instruments, the use of mechanisms for the selection of observation modes, filters, dispersing elements, pinholes or slits is inevitable. The cryogenic operating environment poses several challenges to these mechanisms, such as differential thermal shrinkage, changes in the physical properties of materials, limited use of lubrication, high feature density and limited space. MATISSE, the mid-infrared interferometric spectrograph and imager for ESO's VLT Interferometer (VLTI) at Paranal in Chile, coherently combines the light from four telescopes. Within the Cold Optics Bench (COB) of MATISSE, two concepts of selection mechanisms, based on the same design principles, can be distinguished: linear selection mechanisms (sliders) and rotating selection mechanisms (wheels). Both sliders and wheels are used at a temperature of 38 K and have to provide high accuracy and repeatability. The sliders and wheels have integrated tracks that run on small, accurately located, spring-loaded precision bearings, and special indents are used for selecting the slider or wheel position. For maximum accuracy and repeatability, the guiding/selection system is separated from the actuation, in this case a cryogenic actuator inside the cryostat. The paper discusses the detailed design of the mechanisms and their final realization for the MATISSE COB. Limited lifetime and performance tests determine the accuracy, warm and cold, and the reliability and wear over the life of the instrument. The test results and further improvements to the mechanisms are discussed.

  7. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    PubMed

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.

  8. Bayesian whole-genome prediction and genome-wide association analysis with missing genotypes using variable selection

    USDA-ARS?s Scientific Manuscript database

    Single-step Genomic Best Linear Unbiased Predictor (ssGBLUP) has become increasingly popular for whole-genome prediction (WGP) modeling as it utilizes any available pedigree and phenotypes on both genotyped and non-genotyped individuals. The WGP accuracy of ssGBLUP has been demonstrated to be greate...

  9. Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification.

    PubMed

    Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong

    2017-01-01

    This study presents an improved method based on "Gorji et al., Neuroscience, 2015" by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo-Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is used as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
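
    A minimal sketch of linear regression classification (LRC), the classifier named above: a test vector is regressed onto each class's training samples and assigned to the class with the smallest reconstruction residual. The feature extraction step (pseudo-Zernike moments) is omitted, and the data below are synthetic placeholders.

```python
# Linear regression classification: least-squares fit per class, pick the
# class whose training samples best reconstruct the test vector.
import numpy as np

def lrc_predict(x, class_data):
    """x: (d,) test feature vector; class_data: dict label -> (d, n_c) matrix
    whose columns are that class's training feature vectors."""
    best, best_res = None, np.inf
    for label, A in class_data.items():
        beta, *_ = np.linalg.lstsq(A, x, rcond=None)  # least-squares coefficients
        res = np.linalg.norm(x - A @ beta)            # reconstruction error
        if res < best_res:
            best, best_res = label, res
    return best

rng = np.random.default_rng(1)
train = {0: rng.normal(0.0, 1.0, (256, 20)),   # e.g. 20 control feature vectors
         1: rng.normal(0.5, 1.0, (256, 20))}   # e.g. 20 patient feature vectors
x_test = rng.normal(0.5, 1.0, 256)
print("predicted class:", lrc_predict(x_test, train))
```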

  10. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
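
    For orientation, the conventional scheme that the paper improves on can be sketched as follows: a second-order TVD (MUSCL/minmod) update for linear advection, which avoids Gibbs oscillations but drops to first order at extrema. This is a textbook baseline, not the paper's uniformly second-order scheme.

```python
# Classical second-order TVD scheme (minmod-limited MUSCL) for linear
# advection u_t + a u_x = 0, a > 0, on a periodic grid.
import numpy as np

def minmod(p, q):
    return np.where(p * q > 0.0,
                    np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

N, a, cfl, t_end = 400, 1.0, 0.5, 0.5
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)      # square wave with jumps

nu = a * dt / dx                                    # Courant number
for _ in range(round(t_end / dt)):
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slope (undivided)
    f = a * (u + 0.5 * (1.0 - nu) * s)                 # upwind flux at i+1/2
    u = u - nu / a * (f - np.roll(f, 1))               # conservative update
print("min/max after advection:", u.min(), u.max())    # TVD: no new extrema
```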

  11. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving the economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity, mutations are often introduced near the catalytic active site. In this compact environment, epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models, we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree-2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvement from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree-2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, representing epistatic interactions in predictive models improves the identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
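
    A hedged sketch of the linear-versus-quadratic comparison (on synthetic data, not the 136 AnEH variants): mutation sites are one-hot encoded, an epistatic pairwise term is planted, and linear and degree-2 polynomial SVR models are compared by cross-validated R².

```python
# Linear vs. degree-2 polynomial SVR on binary mutation features with a
# planted epistatic (pairwise) interaction between two sites.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(136, 12)).astype(float)   # 12 mutation sites, on/off
# Activity: additive site effects plus an interaction between sites 2 and 7
y = X @ rng.normal(size=12) + 1.5 * X[:, 2] * X[:, 7] \
    + rng.normal(scale=0.1, size=136)

for kernel, kw in [("linear", {}), ("poly", {"degree": 2, "coef0": 1.0})]:
    r2 = cross_val_score(SVR(kernel=kernel, **kw), X, y, cv=5, scoring="r2")
    print(kernel, "mean CV R^2: %.2f" % r2.mean())
```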

  12. Learning epistatic interactions from sequence-activity data to predict enantioselectivity.

    PubMed

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving the economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity, mutations are often introduced near the catalytic active site. In this compact environment, epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models, we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree-2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvement from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree-2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, representing epistatic interactions in predictive models improves the identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.

  13. Predictive performance of genomic selection methods for carcass traits in Hanwoo beef cattle: impacts of the genetic architecture.

    PubMed

    Mehrban, Hossein; Lee, Deuk Hwan; Moradi, Mohammad Hossein; IlCho, Chung; Naserkheil, Masoumeh; Ibáñez-Escriche, Noelia

    2017-01-04

    Hanwoo beef is known for its marbled fat, tenderness, juiciness and characteristic flavor, as well as for its low cholesterol and high omega-3 fatty acid content. As yet, there has been no comprehensive investigation estimating genomic selection accuracy for carcass traits in Hanwoo cattle using dense markers. This study aimed to evaluate the accuracy of alternative statistical methods that differ in their assumptions about the underlying genetic model for various carcass traits: backfat thickness (BT), carcass weight (CW), eye muscle area (EMA), and marbling score (MS). Accuracies of direct genomic breeding values (DGV) for carcass traits were estimated by applying fivefold cross-validation to a dataset including 1183 animals and approximately 34,000 single nucleotide polymorphisms (SNPs). Accuracies of BayesC, Bayesian LASSO (BayesL) and genomic best linear unbiased prediction (GBLUP) methods were similar for BT, EMA and MS. However, for CW, DGV accuracy was 7% higher with BayesC than with BayesL and GBLUP. The increased accuracy of BayesC, compared to GBLUP and BayesL, was maintained for CW regardless of the training sample size, but not for BT, EMA, and MS. Genome-wide association studies detected consistently large effects for SNPs on chromosomes 6 and 14 for CW. The predictive performance of the models depended on the trait analyzed. For CW, the results showed a clear superiority of BayesC over GBLUP and BayesL. These findings indicate the importance of using a proper variable selection method for genomic selection of traits and also suggest that the genetic architecture underlying CW differs from that of the other carcass traits analyzed. Thus, our study provides significant new insights into the carcass traits of Hanwoo cattle.

  14. A comparison of five methods to predict genomic breeding values of dairy bulls from genome-wide SNP markers

    PubMed Central

    2009-01-01

    Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for the selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV, and initial simulation studies have shown that these methods can predict MBV accurately. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods, which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI; for PPT, only RR-BLUP and SVR predictions were unbiased. A significant decrease in prediction accuracy for ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set; this reduction was not apparent for PPT. Combining MBV predictions with pedigree-based predictions gave 1.05-1.34 times higher accuracies compared to predictions based on pedigree alone. Some methods have largely different computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods which use information from all SNP, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
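
    Of the five methods, RR-BLUP is the simplest to sketch: it is equivalent to ridge regression of phenotypes on centered marker codes, with MBV obtained as the marker-effect predictions. The snippet below uses synthetic genotypes and an arbitrary shrinkage parameter, not the study's data.

```python
# RR-BLUP as ridge regression: all SNP effects shrunk equally; MBV = X @ beta.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, m = 300, 100, 1000                 # animals, markers
X = rng.integers(0, 3, size=(n_train + n_test, m)).astype(float)  # 0/1/2 genotypes
X -= X.mean(axis=0)                                  # center marker codes
true_b = rng.normal(scale=0.05, size=m)
y = X @ true_b + rng.normal(scale=1.0, size=n_train + n_test)

Xtr, ytr = X[:n_train], y[:n_train]
Xte, yte = X[n_train:], y[n_train:]
lam = 100.0                      # ridge parameter, roughly sigma_e^2/sigma_b^2
beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(m), Xtr.T @ ytr)

mbv = Xte @ beta                 # molecular breeding values of "young" animals
print("accuracy (corr with phenotype):",
      np.corrcoef(mbv, yte)[0, 1].round(2))
```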

  15. Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.

    PubMed

    Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A

    2010-01-10

    As the bandwidth and linearity of frequency-modulated continuous-wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed as to their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.

  16. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
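
    A minimal sketch of the rating-curve idea with one common bias correction (Duan's smearing estimator, used here as an illustrative stand-in for the report's specific bias-corrected transformed-linear model): fit log C against log Q by least squares, then back-transform with the smearing factor.

```python
# Transformed-linear sediment rating curve with Duan smearing bias correction.
import numpy as np

rng = np.random.default_rng(0)
Q = 10 ** rng.uniform(0, 3, 200)                     # synthetic discharge
C = 0.5 * Q**1.2 * np.exp(rng.normal(0, 0.4, 200))   # concentration, lognormal error

# Least-squares fit in log space: log C = b0 + b1 log Q
X = np.column_stack([np.ones_like(Q), np.log(Q)])
b, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
resid = np.log(C) - X @ b

smear = np.mean(np.exp(resid))                       # Duan smearing factor

def predict(q):
    """Bias-corrected rating-curve estimate of concentration at discharge q."""
    return np.exp(b[0] + b[1] * np.log(q)) * smear

print("fitted exponent b1 = %.2f, smearing factor = %.3f" % (b[1], smear))
```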

  17. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost severalfold, but also allows us to set the modelling parameters, such as the time step length, grid interval and P-wave speed, flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step length. The method can also be successfully applied to source problems with an absorbing boundary condition, which are frequently encountered in practical use for imaging algorithms or inverse problems.
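
    The classical second-order member of this family is the staggered leapfrog scheme; the paper generalizes its temporal accuracy to arbitrary order. A minimal second-order sketch for the 1-D acoustic system, with illustrative grid and material values, follows.

```python
# Second-order staggered (leapfrog) integration of the 1-D acoustic system
# p_t = -K v_x, v_t = -(1/rho) p_x, on a periodic grid.
import numpy as np

N, L, c, rho = 500, 1.0, 1.0, 1.0
K = rho * c**2                        # bulk modulus
dx = L / N
dt = 0.5 * dx / c                     # CFL-limited time step

x = np.arange(N) * dx
p = np.exp(-200 * (x - 0.5) ** 2)     # pressure at integer times t^n
v = np.zeros(N)                       # velocity at half times t^(n+1/2), staggered in x

for _ in range(400):
    # v lives at x_{i+1/2}: update from the pressure gradient across it
    v -= dt / (rho * dx) * (np.roll(p, -1) - p)
    # p lives at x_i: update from the velocity divergence around it
    p -= dt * K / dx * (v - np.roll(v, 1))

print("max |p| after propagation:", np.abs(p).max())
```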

  18. Extending the accuracy of the SNAP interatomic potential form

    NASA Astrophysics Data System (ADS)

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-06-01

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
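
    The linear-versus-quadratic idea can be sketched with generic descriptors standing in for bispectrum components (real SNAP fits obtain these from LAMMPS, which is not used here): a quadratic feature expansion of the same descriptors is fit with the same linear solver.

```python
# Linear vs. quadratic fit of energies on per-configuration descriptors,
# mimicking the linear-SNAP vs. quadratic-SNAP comparison.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
B = rng.normal(size=(500, 10))                 # stand-in bispectrum descriptors
E = B @ rng.normal(size=10) + 0.3 * (B[:, 0] * B[:, 3]) \
    + rng.normal(0, 0.05, 500)                 # energies with one quadratic term

linear = Ridge(alpha=1e-3).fit(B[:400], E[:400])
quad = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                     Ridge(alpha=1e-3)).fit(B[:400], E[:400])

print("linear    R^2 on held-out data:", round(linear.score(B[400:], E[400:]), 3))
print("quadratic R^2 on held-out data:", round(quad.score(B[400:], E[400:]), 3))
```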

  19. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

    PubMed

    Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

    2015-11-03

    Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for the prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable for predicting the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated for the LR and LDA models. CE revealed no superiority of one model over the other, but the results showed that LR performed better than LDA on the B and Q indices in all situations. No significant effect of sample size on CE was noted for the selection of an optimal model. Assessment of prediction accuracy on real data indicated that the B and Q indices are appropriate for selecting an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when judged by CE; the CE index is not appropriate for model selection, whereas the B and Q indices performed better and offer more efficient criteria for comparison and discrimination between groups.
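
    A hedged sketch of the model comparison (plain cross-validated accuracy; the study's B and Q indices are study-specific and not reimplemented here), on synthetic data sized like the 243-patient sample:

```python
# Cross-validated comparison of logistic regression and LDA.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=243, n_features=8, n_informative=4,
                           random_state=0)   # stand-in for the patient data

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: 10-fold CV accuracy = {acc:.3f}")
```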

  1. High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization

    DTIC Science & Technology

    1992-05-01

    High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization: a thesis by Louis Joseph Poehlman, Captain, USAF. Only front-matter fragments of this record survive; the listed contents include the attitude determination and control system architecture and exact linearization using nonlinear feedback.

  2. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while affecting quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  3. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while affecting quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  4. Efficiency of using first-generation information during second-generation selection: results of computer simulation.

    Treesearch

    T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson

    2004-01-01

    The BLUP (best linear unbiased prediction) method has been widely used in forest tree improvement programs. Since one of the properties of BLUP is that related individuals contribute to the predictions of each other, it seems logical that integrating data from all generations and from all populations would improve both the precision and accuracy in predicting genetic...

  5. The importance of information on relatives for the prediction of genomic breeding values and the implications for the makeup of reference data sets in livestock breeding schemes.

    PubMed

    Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J

    2012-02-09

    The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives, versus that of unrelated or more distantly related individuals, for the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic best linear unbiased prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values was investigated for groups of selection candidates with varying degrees of relationship to a reference data set of 1750 animals. The gBLUP method predicted breeding values more accurately than BLUP, and the most accurate breeding values were estimated using gBLUP for closely related animals. The pedigree-based BLUP methods were likewise accurate for closely related animals; however, when they were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP achieved substantial accuracy even for animals that had no pedigree relationship with animals in the reference data set. An animal's relationship to the reference data set is thus an important factor in the accuracy of genomic predictions: animals that share a close relationship to the reference data set have the highest accuracy. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
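
    A minimal sketch of gBLUP under common simplifying assumptions (VanRaden's genomic relationship matrix, a single variance ratio lam, synthetic genotypes): candidates are predicted from the phenotyped reference animals through their genomic relationships.

```python
# gBLUP sketch: build the genomic relationship matrix G from centered
# genotypes, then predict unphenotyped candidates from the reference set.
import numpy as np

rng = np.random.default_rng(0)
n_ref, n_cand, m = 200, 50, 800
M = rng.integers(0, 3, size=(n_ref + n_cand, m)).astype(float)  # 0/1/2 genotypes

p = M.mean(axis=0) / 2.0                        # allele frequencies
Z = M - 2.0 * p                                 # centered genotypes
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))   # VanRaden's G

n = n_ref + n_cand
u_true = rng.multivariate_normal(np.zeros(n), G + 1e-6 * np.eye(n))
y_ref = u_true[:n_ref] + rng.normal(scale=1.0, size=n_ref)

lam = 1.0                                       # sigma_e^2 / sigma_g^2 (assumed)
# BLUP of candidate breeding values given reference phenotypes
u_cand = G[n_ref:, :n_ref] @ np.linalg.solve(
    G[:n_ref, :n_ref] + lam * np.eye(n_ref), y_ref)
print("accuracy:", np.corrcoef(u_cand, u_true[n_ref:])[0, 1].round(2))
```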

  6. Development and Validation of High-performance Thin Layer Chromatographic Method for Ursolic Acid in Malus domestica Peel

    PubMed Central

    Nikam, P. H.; Kareparamban, J. A.; Jadhav, A. P.; Kadam, V. J.

    2013-01-01

    Ursolic acid, a pentacyclic triterpenoid, possesses a wide range of pharmacological activities: it shows hypoglycemic, antiandrogenic, antibacterial, anti-inflammatory, antioxidant, diuretic and cynogenic activity. It is commonly present in plants, especially in the coating of leaves and fruits, such as apple fruit, vinca leaves, rosemary leaves, and eucalyptus leaves. A simple high-performance thin layer chromatographic method has been developed for the quantification of ursolic acid from apple peel (Malus domestica). The samples were dissolved in methanol and linear ascending development was carried out in a twin-trough glass chamber. The mobile phase was toluene:ethyl acetate:glacial acetic acid (70:30:2). Linear regression analysis of the calibration plots showed a good linear relationship (r² = 0.9982) in the concentration range 0.2-7 μg/spot with respect to peak area. In accordance with ICH guidelines, the method was validated for linearity, accuracy, precision, and robustness. Statistical analysis of the data showed that the method is reproducible and selective for the estimation of ursolic acid. PMID:24302805

  7. Wavelength selection-based nonlinear calibration for transcutaneous blood glucose sensing using Raman spectroscopy

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    While Raman spectroscopy provides a powerful tool for noninvasive and real-time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and by the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full-spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed using SVR leads to calibration models with prediction accuracy equivalent to that of linear full-spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection into such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336

  8. Genetic Programming Transforms in Linear Regression Situations

    NASA Astrophysics Data System (ADS)

    Castillo, Flor; Kordon, Arthur; Villa, Carlos

    The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and lack of fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with an optimal trade-off between prediction accuracy and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated, and some recommendations for transform selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing: robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.

  9. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    NASA Astrophysics Data System (ADS)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model of selective laser melting (SLM) that considers volume shrinkage and the powder-to-dense transition of the powder layer is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is effective and more accurate with respect to the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and likewise with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.
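
    For reference, linear energy density is commonly defined as laser power divided by scan speed; this standard definition (stated here for clarity, not quoted from the abstract) is:

```latex
% Linear energy density: laser power P (W) over scan speed v (mm/s)
E_l = \frac{P}{v} \qquad \left[\mathrm{J\,mm^{-1}}\right]
```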

  10. Vehicle systems and payload requirements evaluation. [computer programs for identifying launch vehicle system requirements

    NASA Technical Reports Server (NTRS)

    Rea, F. G.; Pittenger, J. L.; Conlon, R. J.; Allen, J. D.

    1975-01-01

    Techniques developed for identifying launch vehicle system requirements for NASA automated space missions are discussed. Emphasis is placed on the development of computer programs and the investigation of astrionics for OSS missions and Scout. The Earth Orbit Mission Program - 1, which performs linear error analysis of launch vehicle dispersions for both vehicle and navigation system factors, is described, along with the Interactive Graphic Orbit Selection program, which allows the user to select orbits that satisfy mission requirements and to evaluate the necessary injection accuracy.

  11. Linear response approach to active Brownian particles in time-varying activity fields

    NASA Astrophysics Data System (ADS)

    Merlitz, Holger; Vuijk, Hidde D.; Brader, Joseph; Sharma, Abhinav; Sommer, Jens-Uwe

    2018-05-01

    In a theoretical and simulation study, active Brownian particles (ABPs) in three-dimensional bulk systems are exposed to time-varying sinusoidal activity waves that are running through the system. A linear response (Green-Kubo) formalism is applied to derive fully analytical expressions for the torque-free polarization profiles of non-interacting particles. The activity waves induce fluxes that strongly depend on the particle size and may be employed to de-mix mixtures of ABPs or to drive the particles into selected areas of the system. Three-dimensional Langevin dynamics simulations are carried out to verify the accuracy of the linear response formalism, which is shown to work best when the particles are small (i.e., highly Brownian) or operating at low activity levels.

  12. Automated mapping of impervious surfaces in urban and suburban areas: Linear spectral unmixing of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jian; He, Yuhong

    2017-02-01

    Quantifying impervious surfaces in urban and suburban areas is a key step toward a sustainable urban planning and management strategy. With the availability of fine-scale remote sensing imagery, automated mapping of impervious surfaces has attracted growing attention. However, the vast majority of existing studies have selected pixel-based and object-based methods for impervious surface mapping, with few adopting sub-pixel analysis of high spatial resolution imagery. This research makes use of a vegetation-bright impervious-dark impervious linear spectral mixture model to characterize urban and suburban surface components. A WorldView-3 image acquired on May 9th, 2015 is analyzed for its potential in automated unmixing of meaningful surface materials for two urban subsets and one suburban subset in Toronto, ON, Canada. Given the wide distribution of shadows in urban areas, the linear spectral unmixing is implemented separately in non-shadowed and shadowed areas for the two urban subsets. The results indicate that the accuracy of impervious surface mapping in suburban areas reaches up to 86.99%, much higher than the accuracies in urban areas (80.03% and 79.67%). Despite its merits in mapping accuracy and automation, the application of the proposed vegetation-bright impervious-dark impervious model to map impervious surfaces is limited by the absence of a soil component. To further extend the operational transferability of the proposed method, especially to areas where bare soils are extensive during urbanization or reclamation, it is still necessary to mask out bare soils by automated classification prior to the linear spectral unmixing.

  13. Features selection and classification to estimate elbow movements

    NASA Astrophysics Data System (ADS)

    Rubiano, A.; Ramírez, J. L.; El Korso, M. N.; Jouandeau, N.; Gallimard, L.; Polit, O.

    2015-11-01

    In this paper, we propose a novel method to estimate elbow motion from features extracted from electromyography (EMG) signals. The feature values are normalized and then compared to identify potential relationships between the EMG signal and kinematic information such as angle and angular velocity. We propose and implement a method to select the best set of features, maximizing the distance between the features that correspond to flexion and extension movements. Finally, we test the selected features as inputs to a non-linear support vector machine under non-ideal conditions, obtaining an accuracy of 99.79% in the motion estimation results.

  14. An efficient implementation of a high-order filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-03-01

    A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yields a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of the GLLIP (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row-reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse-matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse-matrix method to the global- and local-domain filters was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.

  15. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable in proteotypic peptide-based quantitative proteomics strategies owing to differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS is explicitly illustrated; different experimental variables, such as MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors, were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts, and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples ranging from yeast to mouse and human, and to quantitative methods from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD, and the posterior probability of each marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When marker density was reduced from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.
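
    A minimal numpy sketch of the SVD shortcut for a ridge-type (SNP-BLUP-like) marker model, under stated assumptions: data are simulated, and the BayesC-specific posterior-probability step of the paper is not reproduced. The point is only that the SVD turns the solve into cheap per-singular-value shrinkage, avoiding MCMC:

      import numpy as np

      rng = np.random.default_rng(1)
      n, m = 300, 1000                        # individuals, markers
      Z = rng.integers(0, 3, size=(n, m)).astype(float)
      Z -= Z.mean(axis=0)                     # center genotype codes
      g_true = np.zeros(m)                    # sparse true marker effects
      g_true[rng.choice(m, 20, replace=False)] = rng.normal(size=20)
      y = Z @ g_true + rng.normal(size=n)

      lam = 50.0                              # ridge (variance ratio) parameter
      U, s, Vt = np.linalg.svd(Z, full_matrices=False)
      # Closed-form ridge estimate: g_hat = V diag(s/(s^2+lam)) U' y
      g_hat = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))
      print("accuracy:", np.corrcoef(Z @ g_hat, Z @ g_true)[0, 1].round(2))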

  17. A comparison of regression methods for model selection in individual-based landscape genetic analysis.

    PubMed

    Shirk, Andrew J; Landguth, Erin L; Cushman, Samuel A

    2018-01-01

    Anthropogenic migration barriers fragment many populations and limit the ability of species to respond to climate-induced biome shifts. Conservation actions designed to conserve habitat connectivity and mitigate barriers are needed to unite fragmented populations into larger, more viable metapopulations, and to allow species to track their climate envelope over time. Landscape genetic analysis provides an empirical means to infer landscape factors influencing gene flow and thereby inform such conservation actions. However, there are currently many methods available for model selection in landscape genetics, and considerable uncertainty as to which provide the greatest accuracy in identifying the true landscape model influencing gene flow among competing alternative hypotheses. In this study, we used population genetic simulations to evaluate the performance of seven regression-based model selection methods on a broad array of landscapes that varied by the number and type of variables contributing to resistance, the magnitude and cohesion of resistance, as well as the functional relationship between variables and resistance. We also assessed the effect of transformations designed to linearize the relationship between genetic and landscape distances. We found that linear mixed effects models had the highest accuracy in every way we evaluated model performance; however, other methods also performed well in many circumstances, particularly when landscape resistance was high and the correlation among competing hypotheses was limited. Our results provide guidance for which regression-based model selection methods provide the most accurate inferences in landscape genetic analysis and thereby best inform connectivity conservation actions. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  18. An improved method of early diagnosis of smoking-induced respiratory changes using machine learning algorithms.

    PubMed

    Amaral, Jorge L M; Lopes, Agnaldo J; Jansen, José M; Faria, Alvaro C D; Melo, Pedro L

    2013-12-01

    The purpose of this study was to develop an automatic classifier to increase the accuracy of the forced oscillation technique (FOT) for diagnosing early respiratory abnormalities in smoking patients. The data consisted of FOT parameters obtained from 56 volunteers, 28 healthy and 28 smokers with low tobacco consumption. Several supervised learning techniques were investigated, including logistic linear classifiers, k-nearest neighbor (KNN), neural networks and support vector machines (SVM). To evaluate performance, the ROC curve of the most accurate single parameter was established as the baseline. To determine the best input features and classifier parameters, we used genetic algorithms and 10-fold cross-validation with the average area under the ROC curve (AUC) as the criterion. In the first experiment, the original FOT parameters were used as input, and we observed a significant improvement in AUC (KNN = 0.89 and SVM = 0.87) compared with the baseline (0.77). The second experiment performed feature selection on the original FOT parameters; this selection did not cause any significant improvement, but it was useful in identifying the most adequate FOT parameters. In the third experiment, we performed feature selection on the cross products of the FOT parameters, which resulted in a further increase in AUC (KNN = SVM = 0.91) and allows for high diagnostic accuracy. In conclusion, machine learning classifiers can help identify early smoking-induced respiratory alterations. The use of FOT cross products and the search for the best features and classifier parameters can markedly improve the performance of machine learning classifiers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
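
    A minimal scikit-learn sketch of the evaluation loop (synthetic stand-ins for the FOT parameters; the genetic-algorithm search of the study is omitted): cross-validated AUC for KNN and SVM classifiers, the criterion used above to compare feature sets:

      from sklearn.datasets import make_classification
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # 56 synthetic "volunteers" with 8 stand-in FOT parameters.
      X, y = make_classification(n_samples=56, n_features=8,
                                 n_informative=4, random_state=0)
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                        ("SVM", SVC())]:
          pipe = make_pipeline(StandardScaler(), clf)
          auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
          print(name, "mean AUC:", round(auc.mean(), 2))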

  19. Genomic selection in sugar beet breeding populations.

    PubMed

    Würschum, Tobias; Reif, Jochen C; Kraft, Thomas; Janssen, Geert; Zhao, Yusheng

    2013-09-18

    Genomic selection exploits dense genome-wide marker data to predict breeding values. In this study we used a large sugar beet population of 924 lines representing different germplasm types present in breeding populations: unselected segregating families and diverse lines from more advanced stages of selection. All lines have been intensively phenotyped in multi-location field trials for six agronomically important traits and genotyped with 677 SNP markers. We used ridge regression best linear unbiased prediction in combination with fivefold cross-validation and obtained high prediction accuracies for all except one trait. In addition, we investigated whether a calibration developed based on a training population composed of diverse lines is suited to predict the phenotypic performance within families. Our results show that the prediction accuracy is lower than that obtained within the diverse set of lines, but comparable to that obtained by cross-validation within the respective families. The results presented in this study suggest that a training population derived from intensively phenotyped and genotyped diverse lines from a breeding program does hold potential to build up robust calibration models for genomic selection. Taken together, our results indicate that genomic selection is a valuable tool and can thus complement the genomics toolbox in sugar beet breeding.
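
    A minimal Python sketch of the protocol under stated assumptions (scikit-learn's Ridge as a generic stand-in for ridge regression BLUP; simulated genotypes sized like the 677-SNP panel): fivefold cross-validation with prediction accuracy measured as the correlation between observed and predicted phenotypes:

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(2)
      n, m = 400, 677                                # lines, SNP markers
      X = rng.integers(0, 3, size=(n, m)).astype(float)
      y = X @ rng.normal(scale=0.05, size=m) + rng.normal(size=n)

      accs = []
      for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = Ridge(alpha=100.0).fit(X[tr], y[tr])
          accs.append(np.corrcoef(y[te], model.predict(X[te]))[0, 1])
      print("mean prediction accuracy:", round(float(np.mean(accs)), 2))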

  20. A bench-top hyperspectral imaging system to classify beef from Nellore cattle based on tenderness

    NASA Astrophysics Data System (ADS)

    Nubiato, Keni Eduardo Zanoni; Mazon, Madeline Rezende; Antonelo, Daniel Silva; Calkins, Chris R.; Naganathan, Govindarajan Konda; Subbiah, Jeyamkondan; da Luz e Silva, Saulo

    2018-03-01

    The aim of this study was to evaluate the accuracy of classification of Nellore beef aged for 0, 7, 14, or 21 days, and of classification based on tenderness and aging period, using a bench-top hyperspectral imaging system. A hyperspectral imaging system (λ = 928-2524 nm) was used to collect hyperspectral images of the Longissimus thoracis et lumborum (aging n = 376 and tenderness n = 345) of Nellore cattle. The image processing steps included selection of the region of interest, extraction of spectra, and identification and evaluation of selected wavelengths for classification. Six linear discriminant models were developed to classify samples based on tenderness and aging period. The model using the first derivative of partial absorbance spectra classified steaks based on tenderness with an overall accuracy of 89.8%. The model using the first derivative of full absorbance spectra classified steaks based on aging period with an overall accuracy of 84.8%. The results demonstrate that hyperspectral imaging may be a viable technology for classifying beef based on tenderness and aging period.

  1. Variations in the Intragene Methylation Profiles Hallmark Induced Pluripotency

    PubMed Central

    Druzhkov, Pavel; Zolotykh, Nikolay; Meyerov, Iosif; Alsaedi, Ahmed; Shutova, Maria; Ivanchenko, Mikhail; Zaikin, Alexey

    2015-01-01

    We demonstrate the potential of differentiating embryonic and induced pluripotent stem cells by regularized linear and decision tree machine learning classification algorithms, based on a number of intragene methylation measures. The resulting average classification accuracy is above 95%, which surpasses earlier results. We propose a constructive and transparent method of feature selection based on classifier accuracy. Enrichment analysis reveals a statistically meaningful presence of stemness-group and cancer-discriminating genes among the selected best-classifying features. These findings motivate further research on the functional consequences of these differences in methylation patterns. The presented approach can be broadly used to discriminate cells of different phenotype or in different states by their methylation profiles, to identify groups of genes constituting multifeature classifiers, and to assess the enrichment of these groups by sets of genes with a functionality of interest. PMID:26618180

  2. Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)

    2007-01-01

    A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
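
    A minimal Python sketch of the selection loop (a generic linear membership stands in for the patented triangular/linear formulation; data are synthetic): every candidate threshold assigns each gray level a membership reflecting how strongly it belongs to its class, and the threshold minimizing the fuzzy entropy is kept:

      import numpy as np

      def fuzzy_threshold(img):
          # Histogram over 8-bit gray levels.
          hist = np.bincount(img.ravel(), minlength=256).astype(float)
          levels = np.arange(256)
          gmin, gmax = int(img.min()), int(img.max())
          span = max(gmax - gmin, 1)
          best_t, best_s = gmin, np.inf
          for t in range(gmin, gmax):
              back, fore = levels <= t, levels > t
              if hist[back].sum() == 0 or hist[fore].sum() == 0:
                  continue
              m0 = (levels[back] * hist[back]).sum() / hist[back].sum()
              m1 = (levels[fore] * hist[fore]).sum() / hist[fore].sum()
              mean = np.where(back, m0, m1)
              # Linear membership in (0.5, 1]: closer to the class mean
              # means stronger membership (stand-in for the patented form).
              mu = np.clip(1.0 - np.abs(levels - mean) / (2.0 * span),
                           1e-12, 1.0 - 1e-12)
              s_lvl = -mu * np.log(mu) - (1 - mu) * np.log(1 - mu)
              s = (s_lvl * hist).sum() / hist.sum()   # fuzzy entropy
              if s < best_s:
                  best_t, best_s = t, s
          return best_t

      rng = np.random.default_rng(0)
      img = np.concatenate([rng.integers(30, 80, 500),
                            rng.integers(170, 230, 500)]).reshape(20, 50)
      print(fuzzy_threshold(img))   # a value between the two modes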

  3. The analysis of the accuracy of spatial models using photogrammetric software: Agisoft Photoscan and Pix4D

    NASA Astrophysics Data System (ADS)

    Barbasiewicz, Adrianna; Widerski, Tadeusz; Daliga, Karol

    2018-01-01

    This article presents research conducted for a master's thesis. The purpose of the measurements was to analyze the accuracy of point positioning by computer programs. The selected software was specialized software dedicated to photogrammetric work; for comparison, tools with similar functionality were chosen. The resolution of the photos on which the key points were searched was selected as the basic parameter affecting the results. The positions of the determined points were computed following the photogrammetric resection rule, and the measurement session planning was omitted in order to automate the measurement. The coordinates of points collected by tachymetric measurement were used as the reference system. The resulting deviations and linear displacements are on the order of millimeters. The visual quality of the point clouds was also briefly analyzed.

  4. Genomic prediction of reproduction traits for Merino sheep.

    PubMed

    Bolormaa, S; Brown, D J; Swan, A A; van der Werf, J H J; Hayes, B J; Daetwyler, H D

    2017-06-01

    Economically important reproduction traits in sheep, such as number of lambs weaned and litter size, are expressed only in females and later in life after most selection decisions are made, which makes them ideal candidates for genomic selection. Accurate genomic predictions would lead to greater genetic gain for these traits by enabling accurate selection of young rams with high genetic merit. The aim of this study was to design and evaluate the accuracy of a genomic prediction method for female reproduction in sheep using daughter trait deviations (DTD) for sires and ewe phenotypes (when individual ewes were genotyped) for three reproduction traits: number of lambs born (NLB), litter size (LSIZE) and number of lambs weaned. Genomic best linear unbiased prediction (GBLUP), BayesR and pedigree BLUP analyses of the three reproduction traits measured on 5340 sheep (4503 ewes and 837 sires) with real and imputed genotypes for 510 174 SNPs were performed. The prediction of breeding values using both sire and ewe trait records was validated in Merino sheep. Prediction accuracy was evaluated by across sire family and random cross-validations. Accuracies of genomic estimated breeding values (GEBVs) were assessed as the mean Pearson correlation adjusted by the accuracy of the input phenotypes. The addition of sire DTD into the prediction analysis resulted in higher accuracies compared with using only ewe records in genomic predictions or pedigree BLUP. Using GBLUP, the average accuracy based on the combined records (ewes and sire DTD) was 0.43 across traits, but the accuracies varied by trait and type of cross-validations. The accuracies of GEBVs from random cross-validations (range 0.17-0.61) were higher than were those from sire family cross-validations (range 0.00-0.51). The GEBV accuracies of 0.41-0.54 for NLB and LSIZE based on the combined records were amongst the highest in the study. Although BayesR was not significantly different from GBLUP in prediction accuracy, it identified several candidate genes which are known to be associated with NLB and LSIZE. The approach provides a way to make use of all data available in genomic prediction for traits that have limited recording. © 2017 Stichting International Foundation for Animal Genetics.

  5. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  6. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  7. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows cast by buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-nearest neighbors (kNN), and applied and compared the integrated models on the shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City in Southeast China. A dataset from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from the LSUA was then added to the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that the integrated models achieved higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images

  8. Advances in metaheuristics for gene selection and classification of microarray data.

    PubMed

    Duval, Béatrice; Hao, Jin-Kao

    2010-01-01

    Gene selection aims at identifying a (small) subset of informative genes from the initial data in order to obtain high predictive accuracy for classification. Gene selection can be considered as a combinatorial search problem and thus be conveniently handled with optimization methods. In this article, we summarize some recent developments in using metaheuristic-based methods within an embedded approach for gene selection. In particular, we highlight the importance and usefulness of integrating problem-specific knowledge into the search operators of such a method. To illustrate the point, we explain how the ranking coefficients of a linear classifier such as a support vector machine (SVM) can be profitably used to reinforce the search efficiency of Local Search and Evolutionary Search metaheuristic algorithms for gene selection and classification.
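
    To illustrate the SVM-ranking idea on its own (scikit-learn assumed; the metaheuristic wrapper of the article is omitted), the following sketch uses recursive feature elimination driven by linear SVM weights, on synthetic microarray-like data:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.svm import LinearSVC

      # 100 "samples" x 500 "genes"; 10 genes carry signal.
      X, y = make_classification(n_samples=100, n_features=500,
                                 n_informative=10, random_state=0)
      svm = LinearSVC(C=0.1, max_iter=5000)
      # RFE repeatedly drops the genes with the smallest |SVM weight|.
      rfe = RFE(svm, n_features_to_select=10, step=0.2).fit(X, y)
      print("selected gene indices:", np.where(rfe.support_)[0])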

  9. Reduced kernel recursive least squares algorithm for aero-engine degradation prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Haowen; Huang, Jinquan; Lu, Feng

    2017-10-01

    Kernel adaptive filters (KAFs) generate a radial basis function (RBF) network that grows linearly with the number of training samples, and thus lack sparseness. To deal with this drawback, traditional sparsification techniques select a subset of the original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, the information conveyed by the redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing accuracy. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduction technique and linear independence. Unlike conventional methods, our methodology employs the redundant data to update the coefficients of the existing network. Owing to this effective utilization of the redundant data, the novel algorithm achieves better accuracy even though the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that the RKRLS algorithm requires much less computational cost while maintaining satisfactory accuracy. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and a Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study on a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.

  10. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between the temperature and output voltage of a thermocouple. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of polynomial equations of different orders. Two quantitative criteria, the average of the absolute error values, |e|_ave, and the standard deviation of the calibration equation, e_std, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of the polynomial equation differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied in hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
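
    A minimal numpy sketch of the evaluation (synthetic voltage-temperature data loosely shaped like a thermocouple curve; real reference tables should be used in practice): fit inverse-calibration polynomials of increasing order and report the two criteria above, |e|_ave and e_std:

      import numpy as np

      rng = np.random.default_rng(0)
      T = np.linspace(0.0, 500.0, 60)                            # deg C
      V = 0.04 * T + 2e-5 * T**2 + rng.normal(0, 0.02, T.size)   # mV

      for order in (1, 2, 3):
          coef = np.polyfit(V, T, order)    # inverse calibration T(V)
          e = T - np.polyval(coef, V)       # calibration errors
          print(f"order {order}: |e|_ave = {np.abs(e).mean():.3f}, "
                f"e_std = {e.std(ddof=order + 1):.3f}")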

  11. Extending the accuracy of the SNAP interatomic potential form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Mitchell A.; Thompson, Aidan P.

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.

  12. Extending the accuracy of the SNAP interatomic potential form

    DOE PAGES

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-03-28

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.
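
    A minimal scikit-learn sketch of the linear-versus-quadratic contrast drawn above (random descriptors stand in for bispectrum components; no real potential is fitted): adding pairwise product terms improves the fit when the target has a quadratic dependence:

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      B = rng.normal(size=(500, 8))            # per-atom descriptors
      # Synthetic "energies" with one quadratic cross term.
      E = (B @ rng.normal(size=8) + 0.5 * B[:, 0] * B[:, 1]
           + rng.normal(scale=0.05, size=500))

      for name, X in [("linear", B),
                      ("quadratic", PolynomialFeatures(2).fit_transform(B))]:
          r2 = cross_val_score(Ridge(alpha=1e-3), X, E, cv=5).mean()
          print(name, "CV R^2:", round(r2, 3))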

  13. Improved age determination of blood and teeth samples using a selected set of DNA methylation markers

    PubMed Central

    Kamalandua, Aubeline

    2015-01-01

    Age estimation from DNA methylation markers has seen an exponential growth of interest, not least from forensic scientists. The current published assays, however, can still be improved by lowering the number of markers in the assay and by providing more accurate models to predict chronological age. From the published literature we selected 4 age-associated genes (ASPA, PDE4C, ELOVL2, and EDARADD) and determined CpG methylation levels from 206 blood samples of both deceased and living individuals (age range: 0-91 years). These data were subsequently used to compare prediction accuracy between linear and non-linear regression models. A quadratic regression model in which the methylation levels of ELOVL2 were squared showed the highest accuracy, with a Mean Absolute Deviation (MAD) between chronological and predicted age of 3.75 years and an adjusted R2 of 0.95. No difference in accuracy was observed between samples obtained from living and deceased individuals or between the 2 genders. In addition, 29 teeth from different individuals (age range: 19-70 years) were analyzed using the same set of markers, resulting in a MAD of 4.86 years and an adjusted R2 of 0.74. Cross-validation of the results obtained from blood samples demonstrated the robustness and reproducibility of the assay. In conclusion, the set of 4 CpG DNA methylation markers is capable of producing highly accurate age predictions for blood samples from deceased and living individuals. PMID:26280308
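
    A minimal scikit-learn sketch of the regression (synthetic methylation data; the marker-age relationships are invented for illustration): ordinary least squares on the four markers plus a squared ELOVL2 term, scored by the MAD criterion used above:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(4)
      age = rng.uniform(0, 91, 206)
      meth = np.column_stack([          # ASPA, PDE4C, ELOVL2, EDARADD
          0.6 - 0.003 * age,
          0.2 + 0.004 * age,
          0.1 + 0.009 * age - 4e-5 * age**2,
          0.3 + 0.002 * age,
      ]) + rng.normal(scale=0.02, size=(206, 4))

      X = np.column_stack([meth, meth[:, 2] ** 2])  # add ELOVL2 squared
      pred = cross_val_predict(LinearRegression(), X, age, cv=10)
      print("MAD:", round(float(np.abs(pred - age).mean()), 2), "years")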

  14. Classification effects of real and imaginary movement selective attention tasks on a P300-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Salvaris, Mathew; Sepulveda, Francisco

    2010-10-01

    Brain-computer interfaces (BCIs) rely on various electroencephalography methodologies that allow the user to convey their desired control to the machine. Common approaches include the use of event-related potentials (ERPs) such as the P300 and modulation of the beta and mu rhythms. All of these methods have their benefits and drawbacks. In this paper, three different selective attention tasks were tested in conjunction with a P300-based protocol (i.e. the standard counting of target stimuli as well as the conduction of real and imaginary movements in sync with the target stimuli). The three tasks were performed by a total of 10 participants, with the majority (7 out of 10) of the participants having never before participated in imaginary movement BCI experiments. Channels and methods used were optimized for the P300 ERP and no sensory-motor rhythms were explicitly used. The classifier used was a simple Fisher's linear discriminant. Results were encouraging, showing that on average the imaginary movement achieved a P300 versus No-P300 classification accuracy of 84.53%. In comparison, mental counting, the standard selective attention task used in previous studies, achieved 78.9% and real movement 90.3%. Furthermore, multiple trial classification results were recorded and compared, with real movement reaching 99.5% accuracy after four trials (12.8 s), imaginary movement reaching 99.5% accuracy after five trials (16 s) and counting reaching 98.2% accuracy after ten trials (32 s).

  15. Classification effects of real and imaginary movement selective attention tasks on a P300-based brain-computer interface.

    PubMed

    Salvaris, Mathew; Sepulveda, Francisco

    2010-10-01

    Brain-computer interfaces (BCIs) rely on various electroencephalography methodologies that allow the user to convey their desired control to the machine. Common approaches include the use of event-related potentials (ERPs) such as the P300 and modulation of the beta and mu rhythms. All of these methods have their benefits and drawbacks. In this paper, three different selective attention tasks were tested in conjunction with a P300-based protocol (i.e. the standard counting of target stimuli as well as the conduction of real and imaginary movements in sync with the target stimuli). The three tasks were performed by a total of 10 participants, with the majority (7 out of 10) of the participants having never before participated in imaginary movement BCI experiments. Channels and methods used were optimized for the P300 ERP and no sensory-motor rhythms were explicitly used. The classifier used was a simple Fisher's linear discriminant. Results were encouraging, showing that on average the imaginary movement achieved a P300 versus No-P300 classification accuracy of 84.53%. In comparison, mental counting, the standard selective attention task used in previous studies, achieved 78.9% and real movement 90.3%. Furthermore, multiple trial classification results were recorded and compared, with real movement reaching 99.5% accuracy after four trials (12.8 s), imaginary movement reaching 99.5% accuracy after five trials (16 s) and counting reaching 98.2% accuracy after ten trials (32 s).

  16. SVM feature selection based rotation forest ensemble classifiers to improve computer-aided diagnosis of Parkinson disease.

    PubMed

    Ozcift, Akin

    2012-08-01

    Parkinson disease (PD) is an age-related deterioration of certain nerve systems that affects movement, balance, and muscle control, and is a common disease affecting 1% of people older than 60 years. A new classification scheme based on support vector machine (SVM)-selected features used to train rotation forest (RF) ensemble classifiers is presented for improving the diagnosis of PD. The dataset contains records of voice measurements from 31 people, 23 of whom have PD; each record is defined by 22 features. The diagnosis model first makes use of a linear SVM to select the ten most relevant of the 22 features. As a second step of the classification model, six different classifiers are trained on the subset of features. Subsequently, in the third step, the accuracies of the classifiers are improved by the utilization of the RF ensemble classification strategy. The results of the experiments are evaluated using three metrics: classification accuracy (ACC), Kappa Error (KE) and Area under the Receiver Operating Characteristic (ROC) Curve (AUC). The performance of the two base classifiers KStar and IBk demonstrated an apparent increase in PD diagnosis accuracy compared to similar studies in the literature. Overall, application of the RF ensemble classification scheme significantly improved PD diagnosis for 5 of the 6 classifiers. We obtained about 97% accuracy with an RF ensemble of the IBk (a k-Nearest Neighbor variant) algorithm, which is quite a high performance for Parkinson disease diagnosis.
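
    A minimal scikit-learn sketch of the two-stage scheme (synthetic data; rotation forest is not available in scikit-learn, so a bagging ensemble of 1-NN, cf. the IBk result above, stands in for it): a linear SVM selects the most relevant features, then an ensemble is trained on the reduced set:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import BaggingClassifier
      from sklearn.feature_selection import SelectFromModel
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      # 195 voice recordings x 22 features, sized like the dataset above.
      X, y = make_classification(n_samples=195, n_features=22,
                                 n_informative=8, random_state=0)
      pipe = make_pipeline(
          StandardScaler(),
          SelectFromModel(LinearSVC(C=0.05, max_iter=5000),
                          threshold=-np.inf, max_features=10),
          BaggingClassifier(KNeighborsClassifier(n_neighbors=1),
                            n_estimators=30, random_state=0))
      print("CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 2))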

  17. [Validation of an in-house method for the determination of zinc in serum: Meeting the requirements of ISO 17025].

    PubMed

    Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L

    2015-01-01

    The aim of this report is to propose a scheme for the validation of an analytical technique according to ISO 17025. The fundamental parameters tested were: selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed and applied successfully to quantify zinc in serum by atomic absorption spectrometry. It is demonstrated that our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  18. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China.

    PubMed

    Ji, Cuicui; Jia, Yonghong; Gao, Zhihai; Wei, Huaidong; Li, Xiaosong

    2017-01-01

    Desert vegetation plays a significant role in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects on photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. Considering computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but it results in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more pronounced for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Thus, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between photosynthetic/non-photosynthetic vegetation and bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models therefore comes down to weighing computational complexity against accuracy requirements.

  19. Nonlinear spectral mixture effects for photosynthetic/non-photosynthetic vegetation cover estimates of typical desert vegetation in western China

    PubMed Central

    Jia, Yonghong; Gao, Zhihai; Wei, Huaidong

    2017-01-01

    Desert vegetation plays a significant role in securing the ecological integrity of oasis ecosystems in western China. Timely monitoring of photosynthetic/non-photosynthetic desert vegetation cover is necessary to guide management practices on land desertification and research into the mechanisms driving vegetation recession. In this study, nonlinear spectral mixture effects on photosynthetic/non-photosynthetic vegetation cover estimates are investigated by comparing the performance of linear and nonlinear spectral mixture models with different endmembers, applied to field spectral measurements of two types of typical desert vegetation, namely Nitraria shrubs and Haloxylon. The main results were as follows. (1) The correct selection of endmembers is important for improving the accuracy of vegetation cover estimates; in particular, shadow endmembers cannot be neglected. (2) For both Nitraria shrubs and Haloxylon, the Kernel-based Nonlinear Spectral Mixture Model (KNSMM) with nonlinear parameters was the best unmixing model. Considering computational complexity and accuracy requirements, the Linear Spectral Mixture Model (LSMM) could be adopted for the Nitraria shrub plots, but it results in significant errors for the Haloxylon plots, since the nonlinear spectral mixture effects were more pronounced for this vegetation type. (3) The vegetation canopy structure (planophile or erectophile) determines the strength of the nonlinear spectral mixture effects. Thus, for both Nitraria shrubs and Haloxylon, nonlinear spectral mixing effects between photosynthetic/non-photosynthetic vegetation and bare soil do exist, and their strength depends on the three-dimensional structure of the vegetation canopy. The choice between linear and nonlinear spectral mixture models therefore comes down to weighing computational complexity against accuracy requirements. PMID:29240777

  20. Voltammetric Electronic Tongue and Support Vector Machines for Identification of Selected Features in Mexican Coffee

    PubMed Central

    Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel

    2014-01-01

    This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by the ET coupled with SVM in both classification problems suggest the potential applicability of the ET for assessing selected coffee features with a simpler and faster methodology requiring no sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving the cost, time and accuracy of the general procedure. PMID:25254303

  1. Voltammetric electronic tongue and support vector machines for identification of selected features in Mexican coffee.

    PubMed

    Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel

    2014-09-24

    This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by the ET coupled with SVM in both classification problems suggest the potential applicability of the ET for assessing selected coffee features with a simpler and faster methodology requiring no sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving the cost, time and accuracy of the general procedure.

  2. Wavelength dependence of position angle in polarization standards

    NASA Astrophysics Data System (ADS)

    Dolan, J. F.; Tapia, S.

    1986-08-01

    Eleven of the 15 stars on Serkowski's (1974) list of "Standard Stars with Large Interstellar Polarization" were investigated to determine whether the orientation of the plane of their linear polarization showed any dependence on wavelength. Nine of the eleven stars exhibited a statistically significant wavelength dependence of position angle when measured with an accuracy of ≈0°.1 standard deviation. For the majority of these stars, the effect is caused primarily by intrinsic polarization. The calibration of polarimeter position angles in a celestial coordinate frame must evidently be done at the 0°.1 level of accuracy by using only carefully selected standard stars or by using other astronomical or laboratory methods.

  3. Wavelength dependence of position angle in polarization standards. [of stellar systems

    NASA Technical Reports Server (NTRS)

    Dolan, J. F.; Tapia, S.

    1986-01-01

    Eleven of the 15 stars on Serkowski's (1974) list of 'Standard Stars with Large Interstellar Polarization' were investigated to determine whether the orientation of the plane of their linear polarization showed any dependence on wavelength. Nine of the eleven stars exhibited a statistically significant wavelength dependence of position angle when measured with an accuracy of about 0.1 deg standard deviation. For the majority of these stars, the effect is caused primarily by intrinsic polarization. The calibration of polarimeter position angles in a celestial coordinate frame must evidently be done at the 0.1 deg level of accuracy by using only carefully selected standard stars or by using other astronomical or laboratory methods.

  4. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers

    PubMed Central

    Thompson, Clarissa A.; Opfer, John E.

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688

  5. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers.

    PubMed

    Thompson, Clarissa A; Opfer, John E

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy.

  6. Integrating environmental covariates and crop modeling into the genomic selection framework to predict genotype by environment interactions.

    PubMed

    Heslot, Nicolas; Akdemir, Deniz; Sorrells, Mark E; Jannink, Jean-Luc

    2014-02-01

    Development of models to predict genotype by environment interactions, in unobserved environments, using environmental covariates, a crop model and genomic selection. Application to a large winter wheat dataset. Genotype by environment interaction (G*E) is one of the key issues when analyzing phenotypes. The use of environment data to model G*E has long been a subject of interest but is limited by the same problems as those addressed by genomic selection methods: a large number of correlated predictors each explaining a small amount of the total variance. In addition, non-linear responses of genotypes to stresses are expected to further complicate the analysis. Using a crop model to derive stress covariates from daily weather data for predicted crop development stages, we propose an extension of the factorial regression model to genomic selection. This model is further extended to the marker level, enabling the modeling of quantitative trait loci (QTL) by environment interaction (Q*E), on a genome-wide scale. A newly developed ensemble method, soft rule fit, was used to improve this model and capture non-linear responses of QTL to stresses. The method is tested using a large winter wheat dataset, representative of the type of data available in a large-scale commercial breeding program. Accuracy in predicting genotype performance in unobserved environments for which weather data were available increased by 11.1% on average and the variability in prediction accuracy decreased by 10.8%. By leveraging agronomic knowledge and the large historical datasets generated by breeding programs, this new model provides insight into the genetic architecture of genotype by environment interactions and could predict genotype performance based on past and future weather scenarios.

  7. Ordinal feature selection for iris and palmprint recognition.

    PubMed

    Sun, Zhenan; Wang, Libin; Tan, Tieniu

    2014-09-01

    Ordinal measures have been demonstrated to be an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, and orientation, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., the misclassification error of intra- and interclass matching samples and the weighted sparsity of the ordinal feature descriptors. The feature selection therefore aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs be well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be obtained efficiently even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso for biometric recognition, reporting state-of-the-art accuracy on the CASIA and PolyU databases.
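
    A minimal scipy sketch of such an LP (synthetic match-score data; the paper's exact objective and constraints may differ): learn sparse non-negative weights over candidate features so that interclass distances exceed intraclass distances by a margin, with slack variables for violations:

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(5)
      p, n = 20, 80                            # candidate features, pairs
      intra = rng.normal(0.3, 0.1, (n, p))     # genuine-pair distances
      inter = rng.normal(0.6, 0.1, (n, p))     # impostor-pair distances
      inter[:, 10:] = intra[:, 10:]            # last 10 features useless

      lam = 0.05                               # sparsity weight
      # Variables [w (p), xi (n)], all >= 0; minimize lam*sum(w) + mean(xi)
      c = np.concatenate([lam * np.ones(p), np.ones(n) / n])
      # Margin constraints: (inter - intra) @ w + xi >= 1.
      A_ub = np.hstack([-(inter - intra), -np.eye(n)])
      b_ub = -np.ones(n)
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
      w = res.x[:p]
      print("features kept:", np.where(w > 1e-6)[0])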

  8. Improving accuracies of genomic predictions for drought tolerance in maize by joint modeling of additive and dominance effects in multi-environment trials.

    PubMed

    Dias, Kaio Olímpio Das Graças; Gezan, Salvador Alejandro; Guimarães, Claudia Teixeira; Nazarian, Alireza; da Costa E Silva, Luciano; Parentoni, Sidney Netto; de Oliveira Guimarães, Paulo Evaristo; de Oliveira Anoni, Carina; Pádua, José Maria Villela; de Oliveira Pinto, Marcos; Noda, Roberto Willians; Ribeiro, Carlos Alexandre Gomes; de Magalhães, Jurandir Vieira; Garcia, Antonio Augusto Franco; de Souza, João Cândido; Guimarães, Lauro José Moreira; Pastina, Maria Marta

    2018-07-01

    Breeding for drought tolerance is a challenging task that requires costly, extensive, and precise phenotyping. Genomic selection (GS) can be used to maximize selection efficiency and the genetic gains in maize (Zea mays L.) breeding programs for drought tolerance. Here, we evaluated the accuracy of genomic selection (GS) using additive (A) and additive + dominance (AD) models to predict the performance of untested maize single-cross hybrids for drought tolerance in multi-environment trials. Phenotypic data of five drought tolerance traits were measured in 308 hybrids along eight trials under water-stressed (WS) and well-watered (WW) conditions over two years and two locations in Brazil. Hybrids' genotypes were inferred based on their parents' genotypes (inbred lines) using single-nucleotide polymorphism markers obtained via genotyping-by-sequencing. GS analyses were performed using genomic best linear unbiased prediction by fitting a factor analytic (FA) multiplicative mixed model. Two cross-validation (CV) schemes were tested: CV1 and CV2. The FA framework allowed for investigating the stability of additive and dominance effects across environments, as well as the additive-by-environment and the dominance-by-environment interactions, with interesting applications for parental and hybrid selection. Results showed differences in the predictive accuracy between A and AD models, using both CV1 and CV2, for the five traits in both water conditions. For grain yield (GY) under WS and using CV1, the AD model doubled the predictive accuracy in comparison to the A model. Through CV2, GS models benefit from borrowing information of correlated trials, resulting in an increase of 40% and 9% in the predictive accuracy of GY under WS for A and AD models, respectively. These results highlight the importance of multi-environment trial analyses using GS models that incorporate additive and dominance effects for genomic predictions of GY under drought in maize single-cross hybrids.

  9. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    PubMed

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracies of GEBV from all tested GS models were substantially higher than those of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iteration 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that even with a much smaller training sample size compared to similar studies in livestock, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.

  10. Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane

    This paper describes the implementation of the room heat transfer model in the free, open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.

  11. Solution of second order quasi-linear boundary value problems by a wavelet method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn

    2015-03-10

    A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one on nonlinear heat conduction and the other on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing them with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and that the rate of convergence can even reach order 5.8.

  12. Genomic selection in sugar beet breeding populations

    PubMed Central

    2013-01-01

    Background Genomic selection exploits dense genome-wide marker data to predict breeding values. In this study we used a large sugar beet population of 924 lines representing different germplasm types present in breeding populations: unselected segregating families and diverse lines from more advanced stages of selection. All lines were intensively phenotyped in multi-location field trials for six agronomically important traits and genotyped with 677 SNP markers. Results We used ridge regression best linear unbiased prediction in combination with fivefold cross-validation and obtained high prediction accuracies for all except one trait. In addition, we investigated whether a calibration developed on a training population composed of diverse lines is suited to predicting the phenotypic performance within families. Our results show that the prediction accuracy is lower than that obtained within the diverse set of lines, but comparable to that obtained by cross-validation within the respective families. Conclusions The results presented in this study suggest that a training population derived from intensively phenotyped and genotyped diverse lines from a breeding program does hold potential for building robust calibration models for genomic selection. Taken together, our results indicate that genomic selection is a valuable tool and can thus complement the genomics toolbox in sugar beet breeding. PMID:24047500
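
    A minimal sketch of the ridge-regression BLUP plus fivefold cross-validation workflow described above, with scikit-learn's Ridge standing in for RR-BLUP (the two coincide for a fixed shrinkage parameter) and fully simulated data; the line and marker counts mirror the study only for orientation.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(1)
        X = rng.integers(0, 3, size=(924, 677)).astype(float)    # 924 lines x 677 SNPs, simulated
        y = X[:, :20].sum(axis=1) + rng.normal(0, 4, size=924)   # toy trait with a genetic signal

        accs = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
            model = Ridge(alpha=100.0).fit(X[train], y[train])   # ridge penalty as RR-BLUP shrinkage
            accs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
        print(f"mean predictive accuracy: {np.mean(accs):.2f}")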

  13. Synthesis of molecular imprinted polymers for selective extraction of domperidone from human serum using high performance liquid chromatography with fluorescence detection.

    PubMed

    Salehi, Simin; Rasoul-Amini, Sara; Adib, Noushin; Shekarchi, Maryam

    2016-08-01

    In this study a novel method is described for selective quantitation of domperidone in biological matrices, applying molecular imprinted polymers (MIPs) as a sample clean-up procedure and using high performance liquid chromatography coupled with a fluorescence detector. MIPs were synthesized with chloroform as the porogen, ethylene glycol dimethacrylate as the crosslinker, methacrylic acid as the monomer, and domperidone as the template molecule. The new imprinted polymer was used as a molecular sorbent for separation of domperidone from serum. Molecular recognition properties, binding capacity and selectivity of the MIPs were determined. The results demonstrated exceptional affinity for domperidone in biological fluids. The domperidone analytical method using MIPs was verified according to validation parameters such as selectivity, linearity (5-80 ng/mL, r² = 0.9977), precision and accuracy (10-40 ng/mL, intra-day = 1.7-5.1%, inter-day = 4.5-5.9%, and accuracy 89.07-98.9%). The limits of detection (LOD) and quantitation (LOQ) of domperidone were 0.0279 and 0.092 ng/mL, respectively. The simplicity and suitable validation parameters make this a highly valuable selective bioequivalence method for domperidone analysis in human serum. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed, the purpose being to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. To achieve high colorimetric accuracy, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating the color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and output data. In this paper, we propose using 1D pre-linearization tables to improve the linearity between the input sensor measurements and the output colorimetric data. Using this method, we can increase the accuracy of the regression and thereby improve the accuracy of the color conversion.
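
    To make the two-step idea concrete, here is a hedged sketch: a hypothetical per-channel 1D pre-linearization table is applied to the raw sensor RGB, and an ordinary least-squares regression then maps the linearized values to colorimetric XYZ. The device model, the LUT shape, and all data are invented for illustration; in practice the LUT would be estimated from calibration measurements.

        import numpy as np

        rng = np.random.default_rng(2)
        raw = rng.uniform(0, 1, size=(200, 3))                   # raw sensor RGB readings
        xyz_ref = raw ** 2.2 @ np.array([[0.41, 0.21, 0.02],
                                         [0.36, 0.72, 0.12],
                                         [0.18, 0.07, 0.95]])    # toy nonlinear device model

        # Step 1: 1D pre-linearization per channel via a monotone lookup table
        grid = np.linspace(0, 1, 33)
        lut = grid ** 2.2                                        # assumed channel response
        lin = np.stack([np.interp(raw[:, i], grid, lut) for i in range(3)], axis=1)

        # Step 2: least-squares regression from linearized RGB to XYZ
        A = np.hstack([lin, np.ones((len(lin), 1))])             # affine regression model
        coef, *_ = np.linalg.lstsq(A, xyz_ref, rcond=None)
        print("max abs conversion error:", np.abs(A @ coef - xyz_ref).max())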

  15. Performance of genomic prediction within and across generations in maritime pine.

    PubMed

    Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent

    2016-08-11

    Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.

  16. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
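
    The core matching step can be sketched as follows: compute the normalized cross correlation between an online RSS vector and each reference-point fingerprint and pick the best match. This is the basic NCC that the paper's FNCC accelerates; the fast computation, the use of RSS variations, and the Kalman/map filter are not reproduced here, and all data are hypothetical.

        import numpy as np

        def ncc(a, b):
            # normalized cross correlation between two RSS vectors
            a = a - a.mean()
            b = b - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        rng = np.random.default_rng(3)
        ref_fps = rng.normal(-70, 8, size=(50, 6))   # 50 reference points x 6 access points
        online = ref_fps[17] + rng.normal(0, 2, 6)   # noisy online sample taken near RP 17

        scores = [ncc(online, fp) for fp in ref_fps]
        print("best-matching reference point:", int(np.argmax(scores)))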

  17. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  18. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and a conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of the x, y, and z axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
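
    For readers unfamiliar with the agreement analysis used here, this is a minimal sketch of the Bland-Altman computation (bias and 95% limits of agreement) on hypothetical paired linear measurements; the numbers are not the study's data.

        import numpy as np

        def bland_altman(x, y):
            # bias and 95% limits of agreement between two measurement methods
            diff = np.asarray(x, float) - np.asarray(y, float)
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        put     = np.array([35.30, 43.01, 28.60, 51.35, 33.41])  # PUT model measurements (mm)
        plaster = np.array([35.12, 42.80, 28.45, 51.02, 33.30])  # plaster model measurements (mm)
        print(bland_altman(put, plaster))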

  19. Reverse phase HPLC method for detection and quantification of lupin seed γ-conglutin.

    PubMed

    Mane, Sharmilee; Bringans, Scott; Johnson, Stuart; Pareek, Vishnu; Utikar, Ranjeet

    2017-09-15

    A simple, selective and accurate reverse phase HPLC method was developed for detection and quantitation of γ-conglutin from lupin seed extract. A linear gradient of water and acetonitrile containing trifluoroacetic acid (TFA) on a reverse phase column (Agilent Zorbax 300SB C-18), with a flow rate of 0.8 mL/min, was able to produce a sharp and symmetric peak of γ-conglutin with a retention time of 29.16 min. The identity of γ-conglutin in the peak was confirmed by mass spectrometry (MS/MS identification) and sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) analysis. The data obtained from MS/MS analysis were matched against the specified database to obtain the exact match for the protein of interest. The proposed method was validated in terms of specificity, linearity, sensitivity, precision, recovery and accuracy. The analytical parameters revealed that the validated method was capable of selectively performing a good chromatographic separation of γ-conglutin from the lupin seed extract with no interference from the matrix. The detection and quantitation limits of γ-conglutin were found to be 2.68 μg/mL and 8.12 μg/mL, respectively. The accuracy (precision and recovery) analysis of the method was conducted under repeatable conditions on different days. Intra-day and inter-day precision values of less than 0.5% and recovery greater than 97% indicated high precision and accuracy of the method for analysis of γ-conglutin. The method validation findings were reproducible, and the method can be successfully applied for routine analysis of γ-conglutin from lupin seed extract. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Validated HPLC determination of 2-[(dimethylamino)methyl]cyclohexanone, an impurity in tramadol, using a precolumn derivatisation reaction with 2,4-dinitrophenylhydrazine.

    PubMed

    Medvedovici, Andrei; Albu, Florin; Farca, Alexandru; David, Victor

    2004-01-27

    A new method for the determination of 2-[(dimethylamino)methyl]cyclohexanone (DAMC) in Tramadol (as active substance or active ingredient in pharmaceutical formulations) is described. The method is based on the derivatisation of 2-[(dimethylamino)methyl]cyclohexanone with 2,4-dinitrophenylhydrazine (2,4-DNPH) in acidic conditions followed by a reversed-phase liquid chromatographic separation with UV detection. The method is simple, selective, quantitative and allows the determination of 2-[(dimethylamino)methyl]cyclohexanone at the low ppm level. The proposed method was validated with respect to selectivity, precision, linearity, accuracy and robustness.

  1. Sensitive and selective determination of methylenedioxylated amphetamines by high-performance liquid chromatography with fluorimetric detection.

    PubMed

    Sadeghipour, F; Veuthey, J L

    1997-11-07

    A rapid, sensitive and selective liquid chromatographic method with fluorimetric detection was developed for the separation and quantification of four methylenedioxylated amphetamines without interference of other drugs of abuse and common substances found in illicit tablets. The method was validated by examining linearity, precision and accuracy as well as detection and quantification limits. Methylenedioxylated amphetamines were quantified in eight tablets from illicit drug seizures and results were quantitatively compared to HPLC-UV analyses. To demonstrate the better sensitivity of the fluorimetric detection, methylenedioxylated amphetamines were analyzed in serum after a liquid-liquid extraction procedure and results were also compared to HPLC-UV analyses.

  2. Spectrophotometric simultaneous determination of Rabeprazole Sodium and Itopride Hydrochloride in capsule dosage form

    NASA Astrophysics Data System (ADS)

    Sabnis, Shweta S.; Dhavale, Nilesh D.; Jadhav, Vijay Y.; Gandhi, Santosh V.

    2008-03-01

    A new simple, economical, rapid, precise and accurate method for simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form has been developed. The method is based on ratio spectra derivative spectrophotometry. The amplitudes in the first derivative of the corresponding ratio spectra at 231 nm (minima) and 260 nm were selected to determine rabeprazole sodium and itopride hydrochloride, respectively. The method was validated with respect to linearity, precision and accuracy.

  3. Spectrophotometric simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form.

    PubMed

    Sabnis, Shweta S; Dhavale, Nilesh D; Jadhav, Vijay Y; Gandhi, Santosh V

    2008-03-01

    A new simple, economical, rapid, precise and accurate method for simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form has been developed. The method is based on ratio spectra derivative spectrophotometry. The amplitudes in the first derivative of the corresponding ratio spectra at 231 nm (minima) and 260 nm were selected to determine rabeprazole sodium and itopride hydrochloride, respectively. The method was validated with respect to linearity, precision and accuracy.

  4. Ariadne's Thread: A Robust Software Solution Leading to Automated Absolute and Relative Quantification of SRM Data.

    PubMed

    Nasso, Sara; Goetze, Sandra; Martens, Lennart

    2015-09-04

    Selected reaction monitoring (SRM) MS is a highly selective and sensitive technique to quantify protein abundances in complex biological samples. To enhance the pace of large SRM studies, a validated, robust method to fully automate absolute quantification and substitute for interactive evaluation would be valuable. To address this demand, we present Ariadne, a Matlab software tool. To quantify monitored targets, Ariadne exploits metadata imported from the transition lists, and targets can be filtered according to mProphet output. Signal processing and statistical learning approaches are combined to compute peptide quantifications. To robustly estimate absolute abundances, the external calibration curve method is applied, ensuring linearity over the measured dynamic range. Ariadne was benchmarked against mProphet and Skyline by comparing its quantification performance on three different dilution series, featuring either noisy/smooth traces without background or smooth traces with complex background. Results, evaluated as efficiency, linearity, accuracy, and precision of quantification, showed that Ariadne's performance is independent of data smoothness and of the presence of complex background, that Ariadne outperforms mProphet on the noisier data set, and that it improved Skyline's accuracy and precision 2-fold for the lowest abundant dilution with complex background. Remarkably, Ariadne could statistically distinguish all the different abundances from each other, discriminating dilutions as low as 0.1 and 0.2 fmol. These results suggest that Ariadne offers reliable and automated analysis of large-scale SRM differential expression studies.
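
    The external calibration curve method mentioned above can be sketched in a few lines: fit a line to a dilution series of known amounts versus measured SRM peak areas, check linearity, and back-calculate the amount of an unknown. The numbers are invented, and Ariadne's signal processing and statistical learning steps are not reproduced.

        import numpy as np

        # hypothetical dilution series: known amounts (fmol) vs. integrated SRM peak areas
        amount = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
        area = np.array([210.0, 395.0, 1010.0, 1980.0, 4050.0, 9900.0])

        slope, intercept = np.polyfit(amount, area, 1)   # linear over the dynamic range
        r2 = np.corrcoef(amount, area)[0, 1] ** 2

        unknown_area = 1500.0
        estimate = (unknown_area - intercept) / slope    # back-calculated absolute amount
        print(f"r^2 = {r2:.4f}, estimated amount = {estimate:.2f} fmol")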

  5. Simultaneous determination of triamcinolone hexacetonide and triamcinolone acetonide in rabbit plasma using a highly sensitive and selective UPLC-MS/MS method.

    PubMed

    Sun, Wei; Ho, Stacy; Fang, Xiaojun Rick; O'Shea, Thomas; Liu, Hanlan

    2018-05-10

    An ultra-high pressure liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was successfully developed and qualified for the simultaneous determination of triamcinolone hexacetonide (TAH) and triamcinolone acetonide (TAA, the active metabolite of TAH) in rabbit plasma. To prevent the hydrolysis of TAH to TAA ex vivo during sample collection and processing, we evaluated the effectiveness of several esterase inhibitors to stabilize TAH in plasma. Phenylmethanesulfonyl fluoride (PMSF) at 2.0 mM was chosen to stabilize TAH in rabbit plasma. The developed method is highly sensitive, with a lower limit of quantitation of 10.0 pg/mL for both TAA and TAH using a 300 μL plasma aliquot. The method demonstrated good linearity, accuracy, precision, sensitivity, selectivity, recovery, matrix effects, dilution integrity, carryover, and stability. Linearity was obtained over the range of 10-2500 pg/mL. Both intra- and inter-run coefficients of variation were less than 9.1%, and accuracies across the assay range were all within 100 ± 8.4%. The run time is under 5 minutes. The method was successfully implemented to support a rabbit pharmacokinetic study of TAH and TAA following a single intra-articular administration of TAH (Aristospan®). Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.

  7. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method.

    PubMed

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-02-01

    Objective To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. Conclusions The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.

  8. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    PubMed Central

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. Conclusions The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil. PMID:25182282

  9. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
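
    Fuzzy SVM itself is not a stock library model, but the conventional baseline it is compared against (a filter feature selection step followed by an SVM) can be sketched with scikit-learn as below; keeping the gene selection inside the pipeline prevents test data from leaking into the gene ranking. The data are a synthetic stand-in for a microarray.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # synthetic stand-in for a microarray: few samples, many genes
        X, y = make_classification(n_samples=72, n_features=2000, n_informative=30,
                                   random_state=0)

        # the filter selection step is re-fit on each training fold of the CV
        clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear", C=1.0))
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())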

  10. Firmness prediction in Prunus persica 'Calrico' peaches by visible/short-wave near infrared spectroscopy and acoustic measurements using optimised linear and non-linear chemometric models.

    PubMed

    Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio

    2015-08-15

    In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA), two non-destructive methods, were applied to Prunus persica 'Calrico' fruit (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. Also, a mutual-information-based variable selection method, seeking the most significant variables for optimal accuracy of the regression models, was applied to a joint set of variables (NIR wavelengths and the AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R²) for the PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (the AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R² values of 0.76 (PLS) and 0.77 (LS-SVM). These results indicate that the proposed mutual-information-based variable selection algorithm is a powerful tool for selecting the most relevant variables. © 2014 Society of Chemical Industry.
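
    A hedged sketch of the variable selection idea: score each candidate variable by its mutual information with the response, keep the top few, and fit a regression model on the reduced set. The data are simulated, PLS stands in for both regression methods, and the paper's specific MI estimator is not reproduced.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.feature_selection import mutual_info_regression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(260, 101))            # 100 "wavelengths" + 1 "acoustic" variable
        y = 2 * X[:, 100] + X[:, 40] - X[:, 60] + rng.normal(0, 0.5, 260)  # toy firmness

        mi = mutual_info_regression(X, y, random_state=0)
        top3 = np.argsort(mi)[-3:]                 # keep the three most informative variables
        pls = PLSRegression(n_components=2)
        print(top3, cross_val_score(pls, X[:, top3], y, cv=5, scoring="r2").mean())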

  11. The Use of Linear Programming for Prediction.

    ERIC Educational Resources Information Center

    Schnittjer, Carl J.

    The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)

  12. Prediction accuracies for growth and wood attributes of interior spruce in space using genotyping-by-sequencing.

    PubMed

    Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Chen, Charles; Porth, Ilga; El-Kassaby, Yousry A

    2015-05-09

    Genomic selection (GS) in forestry can substantially reduce the length of the breeding cycle and increase gain per unit time through early selection and greater selection intensity, particularly for traits of low heritability and late expression. Affordable next-generation sequencing technologies made it possible to genotype large numbers of trees at a reasonable cost. Genotyping-by-sequencing was used to genotype 1,126 interior spruce trees representing 25 open-pollinated families planted over three sites in British Columbia, Canada. Four imputation algorithms were compared (mean value (MI), singular value decomposition (SVD), expectation maximization (EM), and a newly derived, family-based k-nearest neighbor (kNN-Fam)). Trees were phenotyped for several yield and wood attributes. Single- and multi-site GS prediction models were developed using the Ridge Regression Best Linear Unbiased Predictor (RR-BLUP) and the Generalized Ridge Regression (GRR) to test different assumptions about trait architecture. Finally, using PCA, multi-trait GS prediction models were developed. The EM and kNN-Fam imputation methods were superior for 30 and 60% missing data, respectively. The RR-BLUP GS prediction model produced better accuracies than the GRR, indicating that the genetic architecture for these traits is complex. Multi-site GS prediction accuracies were high and better than those of single sites, while cross-site predictions produced the lowest accuracies, reflecting type-b genetic correlations, and were deemed unreliable. The incorporation of genomic information in quantitative genetics analyses produced more realistic heritability estimates, as the half-sib pedigree tended to inflate the additive genetic variance and subsequently both heritability and gain estimates. Principal component scores, as representatives of multi-trait GS prediction models, produced surprising results in which negatively correlated traits could be concurrently selected for using PCA2 and PCA3. The application of GS to open-pollinated family testing, the simplest form of tree improvement evaluation, was proven to be effective. Prediction accuracies obtained for all traits greatly support the integration of GS in tree breeding. While the within-site GS prediction accuracies were high, the results clearly indicate that single-site GS models' ability to predict other sites is unreliable, supporting the utilization of the multi-site approach. Principal component scores provided an opportunity for the concurrent selection of traits with different phenotypic optima.

  13. Development and application of a validated HPLC method for the analysis of dissolution samples of levothyroxine sodium drug products.

    PubMed

    Collier, J W; Shah, R B; Bryant, A R; Habib, M J; Khan, M A; Faustino, P J

    2011-02-20

    A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (L-T4) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250 mm × 3.9 mm) using a 0.01 M phosphate buffer (pH 3.0)-methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and a detection UV wavelength of 225 nm. The injection volume was 800 μL and the column temperature was maintained at 28°C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r² > 0.99) over the analytical range of 0.08-0.8 μg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for L-T4 over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. Published by Elsevier B.V.

  14. Development and application of a validated HPLC method for the analysis of dissolution samples of levothyroxine sodium drug products

    PubMed Central

    Collier, J.W.; Shah, R.B.; Bryant, A.R.; Habib, M.J.; Khan, M.A.; Faustino, P.J.

    2011-01-01

    A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (l-T4) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250 mm × 3.9 mm) using a 0.01 M phosphate buffer (pH 3.0)-methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 µL and the column temperature was maintained at 28 °C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r² > 0.99) over the analytical range of 0.08-0.8 µg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for l-T4 over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. PMID:20947276

  15. Discovering Optimum Method to Extract Depth Information for Nearshore Coastal Waters from SENTINEL-2A - Case Study: Nayband Bay, Iran

    NASA Astrophysics Data System (ADS)

    Kabiri, K.

    2017-09-01

    The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map of the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) using the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R²) and root mean square errors (RMSE) for validation points were calculated for all models and for the two satellite images. When compared with the linear transform method, the method employing ratio transformation with a combination of all three bands yielded more accurate results (R² = 0.795 and RMSE = 1.889 m for March; R² = 0.777 and RMSE = 2.039 m for February). Although most of the integrated transform methods (specifically the method including all bands and band ratios) yielded the highest accuracy, these increments were not significant; hence, the ratio transformation was selected as the optimum method.
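
    The ratio transform can be sketched in the spirit of the Stumpf log-ratio model (an assumption about the exact form used in the paper): regress field depths on the log ratio of two band reflectances, then report R² and RMSE for the predictions. The reflectances below follow a toy exponential attenuation model rather than real Sentinel-2A data.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 300
        depth = rng.uniform(1, 15, n)                        # field-measured depths (m), hypothetical
        blue = 0.9 * np.exp(-0.08 * depth) + 0.1 + rng.normal(0, 0.01, n)
        green = 0.9 * np.exp(-0.12 * depth) + 0.1 + rng.normal(0, 0.01, n)

        ratio = np.log(1000 * blue) / np.log(1000 * green)   # Stumpf-style band ratio
        m1, m0 = np.polyfit(ratio, depth, 1)                 # calibrate against field data
        pred = m1 * ratio + m0

        rmse = np.sqrt(np.mean((pred - depth) ** 2))
        r2 = np.corrcoef(pred, depth)[0, 1] ** 2
        print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f} m")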

  16. RP-HPLC Method Development and Validation for Determination of Eptifibatide Acetate in Bulk Drug Substance and Pharmaceutical Dosage Forms.

    PubMed

    Bavand Savadkouhi, Maryam; Vahidi, Hossein; Ayatollahi, Abdul Majid; Hooshfar, Shirin; Kobarfard, Farzad

    2017-01-01

    A new, rapid, economical and isocratic reverse phase high performance liquid chromatography (RP-HPLC) method was developed for the determination of eptifibatide acetate, a small synthetic antiplatelet peptide, in bulk drug substance and pharmaceutical dosage forms. The developed method was validated as per ICH guidelines. The chromatographic separation was achieved isocratically on a C18 column (150 × 4.60 mm i.d., 5 µm particle size) at ambient temperature using acetonitrile (ACN), water and trifluoroacetic acid (TFA) as the mobile phase at a flow rate of 1 mL/min and UV detection at 275 nm. Eptifibatide acetate exhibited linearity over the concentration range of 0.15-2 mg/mL (r² = 0.997) with a limit of detection of 0.15 mg/mL. The accuracy of the method was 96.4-103.8%. The intra-day and inter-day precision values were between 0.052% and 0.598%. The present successfully validated method, with excellent selectivity, linearity, sensitivity, precision and accuracy, is applicable for the assay of eptifibatide acetate in bulk drug substance and pharmaceutical dosage forms.

  17. Anodic Oxidation of Etodolac and its Linear Sweep, Square Wave and Differential Pulse Voltammetric Determination in Pharmaceuticals

    PubMed Central

    Yilmaz, B.; Kaban, S.; Akcay, B. K.

    2015-01-01

    In this study, simple, fast and reliable cyclic voltammetry, linear sweep voltammetry, square wave voltammetry and differential pulse voltammetry methods were developed and validated for determination of etodolac in pharmaceutical preparations. The proposed methods were based on electrochemical oxidation of etodolac at a platinum electrode in acetonitrile solution containing 0.1 M lithium perchlorate. A well-defined oxidation peak was observed at 1.03 V. The calibration curves were linear for etodolac over the concentration range of 2.5-50 μg/ml for the linear sweep, square wave and differential pulse voltammetry methods. Intra- and inter-day precision values for etodolac were less than 4.69%, and accuracy (relative error) was better than 2.00%. The mean recovery of etodolac was 100.6% for pharmaceutical preparations. No interference was found from three tablet excipients at the selected assay conditions. The methods developed in this study are accurate and precise, and can be easily applied to Etol, Tadolak and Etodin tablets as pharmaceutical preparations. PMID:26664057

  18. Automatic classification of small bowel mucosa alterations in celiac disease for confocal laser endomicroscopy

    NASA Astrophysics Data System (ADS)

    Boschetto, Davide; Di Claudio, Gianluca; Mirzaei, Hadis; Leong, Rupert; Grisan, Enrico

    2016-03-01

    Celiac disease (CD) is an immune-mediated enteropathy triggered by exposure to gluten and similar proteins, affecting genetically susceptible persons and increasing their risk of different complications. Small bowel mucosa damage due to CD involves various degrees of endoscopically relevant lesions, which are not easily recognized: their overall sensitivity and positive predictive values are poor even when zoom endoscopy is used. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to qualitatively evaluate mucosa alterations such as a decrease in goblet cell density, presence of villous atrophy or crypt hypertrophy. We present a method for automatically classifying CLE images into three different classes: normal regions, villous atrophy and crypt hypertrophy. This classification is performed after a feature selection process, in which four features are extracted from each image through the application of homomorphic filtering and border identification with Canny and Sobel operators. Three different classifiers were tested on a dataset of 67 images labeled by experts into three classes (normal, VA and CH): a linear approach, a Naive-Bayes quadratic approach and a standard quadratic analysis, all validated with ten-fold cross validation. Linear classification achieves 82.09% accuracy (class accuracies: 90.32% for normal villi, 82.35% for VA and 68.42% for CH; sensitivity: 0.68, specificity: 1.00), Naive Bayes analysis returns 83.58% accuracy (90.32% for normal villi, 70.59% for VA and 84.21% for CH; sensitivity: 0.84, specificity: 0.92), while the quadratic analysis achieves a final accuracy of 94.03% (96.77% for normal villi, 94.12% for VA and 89.47% for CH; sensitivity: 0.89, specificity: 0.98).
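
    The three classifiers compared above correspond to standard discriminant models; a minimal scikit-learn sketch with ten-fold cross validation, using a synthetic stand-in for the 67 four-feature CLE images, looks like this.

        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                   QuadraticDiscriminantAnalysis)
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        # stand-in for 67 CLE images x 4 extracted features, 3 classes
        X, y = make_classification(n_samples=67, n_features=4, n_informative=4,
                                   n_redundant=0, n_classes=3, random_state=0)

        for name, clf in [("linear", LinearDiscriminantAnalysis()),
                          ("naive Bayes", GaussianNB()),
                          ("quadratic", QuadraticDiscriminantAnalysis())]:
            print(name, cross_val_score(clf, X, y, cv=10).mean())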

  19. Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.

    PubMed

    Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E

    2017-08-24

    The rapid adoption of genomic selection is due to two key factors: availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding, similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of the G-BLUP and Bayes-C methods for GP. Its relative performance was compared to those of G-BLUP and Bayes-C. We used an imputed 50 K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was very much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve accuracy of GP over the commonly used method: G-BLUP. Generally, GBC did not perform statistically better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals. Nevertheless, it is a flexible tool, in the sense that it simultaneously incorporates some aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.

  20. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear model correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r = 0.88, RMSE: 1.12 METs; linear model correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear model correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
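
    A compact sketch of the linear-versus-ANN comparison on simulated accelerometer summary features, with scikit-learn's MLPRegressor standing in for the paper's ANNs (the linear mixed models are omitted):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(6)
        X = rng.normal(size=(400, 12))           # accelerometer summary features, simulated
        y = 1.5 + X[:, 0] + 0.5 * np.abs(X[:, 1]) + rng.normal(0, 0.3, 400)  # toy METs

        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                         random_state=0))
        for name, model in [("linear", LinearRegression()), ("ANN", ann)]:
            print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean())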

  1. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    NASA Astrophysics Data System (ADS)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    The existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify pixels as cloud or non-cloud; when the cloud is thin and small, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed. Using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. Firstly, the automatic cloud detection program uses the linear combination model to separate the cloud information from the surface information in transparent cloud images, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier; AdaBoost can select the most effective features from many ordinary features, so the calculation time is greatly reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier for comparison with the proposed method. The experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
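
    The feature-combination step can be sketched with a stock AdaBoost classifier, whose default decision-stump weak learners effectively pick out the most discriminative features, matching the role AdaBoost plays in the paper; the features and labels below are synthetic placeholders rather than real image features.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        # stand-in for per-patch image features (texture, brightness, ...), cloud vs. not
        X, y = make_classification(n_samples=1000, n_features=20, n_informative=6,
                                   random_state=0)

        # the default weak learner is a depth-1 decision tree, so each boosting round
        # thresholds a single feature, acting as an embedded feature selector
        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())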

  2. An SVM-based solution for fault detection in wind turbines.

    PubMed

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines is insufficient on its own for the diagnosis of mechanical faults in their mechanical transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets.

  3. ADER schemes for scalar non-linear hyperbolic conservation laws with source terms in three-space dimensions

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2005-01-01

    In this paper we develop non-linear ADER schemes for time-dependent scalar linear and non-linear conservation laws in one-, two- and three-space dimensions. Numerical results of schemes of up to fifth order of accuracy in both time and space illustrate that the designed order of accuracy is achieved in all space dimensions for a fixed Courant number and essentially non-oscillatory results are obtained for solutions with discontinuities. We also present preliminary results for two-dimensional non-linear systems.

  4. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination.

    PubMed

    Sørensen, Lauge; Nielsen, Mads

    2018-05-15

    The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.
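
    An approximate scikit-learn sketch of the ensemble idea: each member is a feature-selection-plus-SVM pipeline trained on a subsample drawn without replacement (bootstrap=False). The data are synthetic stand-ins for the MRI and MMSE features, and the challenge's exact feature set and tuning are not reproduced.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # stand-in for MRI volumes + MMSE: 4-class problem (NC / MCI / cMCI / AD)
        X, y = make_classification(n_samples=240, n_features=300, n_informative=25,
                                   n_classes=4, random_state=0)

        base = make_pipeline(SelectKBest(f_classif, k=40), SVC(kernel="rbf", gamma="scale"))
        ens = BaggingClassifier(base, n_estimators=25, max_samples=0.8,
                                bootstrap=False, random_state=0)  # bagging without replacement
        print(cross_val_score(ens, X, y, cv=5).mean())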

  5. Linearity, Bias, and Precision of Hepatic Proton Density Fat Fraction Measurements by Using MR Imaging: A Meta-Analysis.

    PubMed

    Yokoo, Takeshi; Serai, Suraj D; Pirasteh, Ali; Bashir, Mustafa R; Hamilton, Gavin; Hernando, Diego; Hu, Houchun H; Hetterich, Holger; Kühn, Jens-Peter; Kukuk, Guido M; Loomba, Rohit; Middleton, Michael S; Obuchowski, Nancy A; Song, Ji Soo; Tang, An; Wu, Xinhuai; Reeder, Scott B; Sirlin, Claude B

    2018-02-01

    Purpose: To determine the linearity, bias, and precision of hepatic proton density fat fraction (PDFF) measurements by using magnetic resonance (MR) imaging across different field strengths, imager manufacturers, and reconstruction methods. Materials and Methods: This meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search identified studies that evaluated the linearity and/or bias of hepatic PDFF measurements by using MR imaging (hereafter, MR imaging-PDFF) against PDFF measurements by using colocalized MR spectroscopy (hereafter, MR spectroscopy-PDFF) or the precision of MR imaging-PDFF. The quality of each study was evaluated by using the Quality Assessment of Studies of Diagnostic Accuracy 2 tool. De-identified original data sets from the selected studies were pooled. Linearity was evaluated by using linear regression between MR imaging-PDFF and MR spectroscopy-PDFF measurements. Bias, defined as the mean difference between MR imaging-PDFF and MR spectroscopy-PDFF measurements, was evaluated by using Bland-Altman analysis. Precision, defined as the agreement between repeated MR imaging-PDFF measurements, was evaluated by using a linear mixed-effects model, with field strength, imager manufacturer, reconstruction method, and region of interest as random effects. Results: Twenty-three studies (1679 participants) were selected for linearity and bias analyses and 11 studies (425 participants) were selected for precision analyses. MR imaging-PDFF was linear with MR spectroscopy-PDFF (R² = 0.96). Regression slope (0.97; P < .001) and mean Bland-Altman bias (-0.13%; 95% limits of agreement: -3.95%, 3.40%) indicated minimal underestimation by using MR imaging-PDFF. MR imaging-PDFF was precise at the region-of-interest level, with repeatability and reproducibility coefficients of 2.99% and 4.12%, respectively. Field strength, imager manufacturer, and reconstruction method each had minimal effects on reproducibility. Conclusion: MR imaging-PDFF has excellent linearity, bias, and precision across different field strengths, imager manufacturers, and reconstruction methods. © RSNA, 2017. Online supplemental material is available for this article. An earlier incorrect version of this article appeared online; it was corrected on October 2, 2017.

  6. Determination of the neuropharmacological drug nodakenin in rat plasma and brain tissues by liquid chromatography tandem mass spectrometry: Application to pharmacokinetic studies.

    PubMed

    Song, Yingshi; Yan, Huiyu; Xu, Jingbo; Ma, Hongxi

    2017-09-01

    A rapid and sensitive liquid chromatography-tandem mass spectrometry method using selected reaction monitoring in positive ionization mode was developed and validated for the quantification of nodakenin in rat plasma and brain. Praeruptorin A was used as the internal standard. A single-step liquid-liquid extraction was used for plasma and brain sample preparation. The method was validated with respect to selectivity, precision, accuracy, linearity, limit of quantification, recovery, matrix effect and stability. The lower limit of quantification of nodakenin was 2.0 ng/mL in plasma and brain tissue homogenates. Linear calibration curves were obtained over the concentration range of 2.0-1000 ng/mL in both plasma and brain tissue homogenates. Intra-day and inter-day precisions (relative standard deviation, RSD) were <15% in both biological media. The assay was successfully applied to plasma and brain pharmacokinetic studies of nodakenin in rats after intravenous administration. Copyright © 2017 John Wiley & Sons, Ltd.
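
    As a sketch of the linearity assessment such validations perform, the snippet below fits a calibration line over the stated 2.0-1000 ng/mL range and back-calculates accuracy at each level; the peak-area ratios are invented, and real bioanalytical work would typically use weighted (e.g., 1/x) regression:

    ```python
    import numpy as np

    conc = np.array([2.0, 5.0, 20.0, 100.0, 500.0, 1000.0])     # ng/mL
    ratio = np.array([0.011, 0.026, 0.101, 0.498, 2.52, 5.01])  # analyte/IS peak-area ratio (hypothetical)

    slope, intercept = np.polyfit(conc, ratio, 1)
    r = np.corrcoef(conc, ratio)[0, 1]                          # correlation coefficient
    back = (ratio - intercept) / slope                          # back-calculated concentrations
    print(f"r = {r:.4f}", (100 * back / conc).round(1))         # % of nominal at each level
    ```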

  7. Development of a method for the determination of clozapine using an iodide-selective electrode.

    PubMed

    Teyeb, Hassen; Douki, Wahiba; Najjar, Mohamed Fadhel

    2012-07-01

    Clozapine (Leponex®), the main neuroleptic indicated in the treatment of resistant schizophrenia, requires therapeutic monitoring because of its side effects and the individual variability in its metabolism. In addition, several cases of intoxication by this drug have been described in the literature. In this work, we studied the indirect determination of clozapine with an iodide-selective electrode, with the aim of optimizing an analytical protocol for therapeutic monitoring and for the diagnosis of intoxication and/or overdose. Our results showed that the developed method is linear between 0.05 and 12.5 µg/mL (r = 0.980), with a limit of detection of 0.645 × 10⁻³ µg/mL. It shows good precision (coefficient of variation less than 4%) and accuracy (less than 10%) at all the concentrations studied. With a linear range covering a wide span of concentrations, this method is applicable to the determination of clozapine in tablets and in different biological matrices, such as plasma, urine, and postmortem samples.
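
    The underlying measurement principle is the usual ion-selective-electrode calibration: the cell potential varies linearly with log10 of the ion concentration (Nernstian behaviour), so an unknown is recovered by inverting the fitted line. The sketch below illustrates this with invented potentials; the paper's indirect clozapine-to-iodide chemistry is not modelled:

    ```python
    import numpy as np

    conc = np.array([0.05, 0.25, 1.0, 5.0, 12.5])        # µg/mL, within the linear range above
    emf = np.array([310.0, 270.5, 235.2, 195.8, 172.1])  # mV, hypothetical electrode readings

    slope, intercept = np.polyfit(np.log10(conc), emf, 1)

    def concentration(e_mv):
        """Invert the calibration line to get concentration from potential."""
        return 10 ** ((e_mv - intercept) / slope)

    print(f"slope = {slope:.1f} mV/decade, unknown at 250 mV = {concentration(250.0):.2f} ug/mL")
    ```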

  8. Selection of finite-element mesh parameters in modeling the growth of hydraulic fracturing cracks

    NASA Astrophysics Data System (ADS)

    Kurguzov, V. D.

    2016-12-01

    The effect of the mesh geometry on the accuracy of solutions obtained by the finite-element method for problems of linear fracture mechanics is investigated. Guidelines are formulated for constructing an optimal mesh for several routine problems, involving elements with linear and quadratic approximation of displacements. The accuracy of the finite-element solutions is estimated from the degree of difference between the calculated stress-intensity factor (SIF) and its analytical value. In problems of hydraulic fracturing of oil-bearing formations, the pump-in pressure of the injected water produces a distributed load on the crack flanks, as opposed to standard fracture mechanics problems with analytical solutions, where the load is applied to the external boundaries of the computational region and the cracks themselves are kept free of stresses. Some model pressure profiles, as well as pressure profiles taken from real hydrodynamic computations, are considered. Computer models of cracks allowing for the pre-stressed state, fracture toughness, and elastic properties of the materials were developed in the MSC.Marc 2012 finite-element analysis software. The Irwin force criterion is used as the criterion of brittle fracture, and the SIFs are computed using the Cherepanov-Rice invariant J-integral. The process of crack propagation in a linearly elastic isotropic body is described in terms of the elastic energy release rate G and modeled using the VCCT (Virtual Crack Closure Technique) approach. It was found that the solution accuracy is sensitive to the mesh configuration. Several parameters that are decisive in constructing effective finite-element meshes have been established, namely, the minimum element size, the distance between mesh nodes in the vicinity of a crack tip, and the ratio of the height of an element to its length. It is shown that a mesh consisting only of small elements does not improve the accuracy of the solution.
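
    For reference, the textbook linear-elastic relations connecting the quantities named above (a standard identity, not taken from the paper itself) are:

    ```latex
    J \;=\; G \;=\; \frac{K_I^{2}}{E'},
    \qquad
    E' =
    \begin{cases}
      E, & \text{plane stress},\\[4pt]
      \dfrac{E}{1-\nu^{2}}, & \text{plane strain},
    \end{cases}
    \qquad
    \text{crack growth when } K_I \ge K_{Ic}.
    ```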

  9. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  10. Exploring Deep Learning and Sparse Matrix Format Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Y.; Liao, C.; Shen, X.

    We proposed to explore the use of Deep Neural Networks (DNN) for addressing the longstanding barriers. The recent rapid progress of DNN technology has created a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification (e.g., telling whether an image contains a dog or a cat): in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification, the patterns are of pixels; for sparse matrix format selection, they are of non-zero elements. DNN could be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as classification problems.
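
    A toy version of the "sparse matrix as image" idea: summarise the non-zero pattern as a small fixed-size density grid that any image classifier (e.g., a CNN) could consume. The matrix, grid size and normalisation are assumptions for illustration:

    ```python
    import numpy as np
    from scipy import sparse

    def density_image(A, bins=8):
        """Histogram the non-zero coordinates of A into a bins x bins grid."""
        A = A.tocoo()
        img, _, _ = np.histogram2d(A.row, A.col, bins=bins,
                                   range=[[0, A.shape[0]], [0, A.shape[1]]])
        return img / max(A.nnz, 1)      # per-cell share of the non-zeros

    A = sparse.random(1000, 1000, density=0.002, random_state=0)
    print(density_image(A).round(3))    # 8x8 "picture" of the sparsity pattern
    ```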

  11. Auditory evoked potentials in patients with major depressive disorder measured by Emotiv system.

    PubMed

    Wang, Dongcui; Mo, Fongming; Zhang, Yangde; Yang, Chao; Liu, Jun; Chen, Zhencheng; Zhao, Jinfeng

    2015-01-01

    In a previous study (unpublished), the Emotiv headset was validated for capturing event-related potentials (ERPs) from normal subjects. In the present follow-up study, the signal quality of the Emotiv headset was tested by the accuracy with which it discriminated Major Depressive Disorder (MDD) patients from normal subjects. ERPs of 22 MDD patients and 15 normal subjects were induced by an auditory oddball task, and the amplitudes of the N1, N2 and P3 ERP components were specifically analyzed. The features of the ERPs were statistically investigated. It was found that the Emotiv headset is capable of discriminating the abnormal N1, N2 and P3 components in MDD patients. The Relief-F algorithm was applied to all features for feature selection. The selected features were then input to a linear discriminant analysis (LDA) classifier with leave-one-out cross-validation to characterize the ERP features of MDD. The 127 possible combinations of the 7 selected ERP features were classified using LDA. The best classification accuracy achieved was 89.66%. These results suggest that MDD patients are identifiable from normal subjects by ERPs measured with the Emotiv headset.

  12. Improving the sensitivity and specificity of a bioanalytical assay for the measurement of certolizumab pegol.

    PubMed

    Smeraglia, John; Silva, John-Paul; Jones, Kieran

    2017-08-01

    In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.

  13. Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1978-01-01

    Two FORTRAN programs using an interactive graphics terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex-variable solutions, and generates plots showing the accuracy and error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog of the stability region for classes of non-linear ODEs, as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
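
    Stability-region plots of this kind are typically produced with the boundary-locus method: for a linear multistep method with characteristic polynomials rho and sigma, the boundary is traced by h*lambda = rho(exp(i*theta))/sigma(exp(i*theta)). A sketch for the 2-step Adams-Bashforth method, chosen here as an assumed example:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    theta = np.linspace(0.0, 2.0 * np.pi, 400)
    z = np.exp(1j * theta)
    rho = z**2 - z                  # rho(z) for 2-step Adams-Bashforth
    sigma = 1.5 * z - 0.5           # sigma(z) for 2-step Adams-Bashforth
    hl = rho / sigma                # boundary locus in the h*lambda plane

    plt.plot(hl.real, hl.imag)
    plt.xlabel("Re(h*lambda)"); plt.ylabel("Im(h*lambda)")
    plt.title("Stability-region boundary, AB2")
    plt.axis("equal"); plt.show()
    ```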

  14. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1997-01-01

    A three-dimensional, linearized Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic responses of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite-volume analysis to a far-field eigenanalysis, is also described. The linearized aerodynamic and numerical models have been implemented into a three-dimensional linearized unsteady flow code, called LINFLUX. This code has been applied to selected benchmark unsteady subsonic flows to establish its accuracy and to demonstrate its current capabilities. The unsteady flows considered have been chosen to allow convenient comparisons between the LINFLUX results and those of well-known two-dimensional unsteady flow codes. Detailed numerical results for a helical fan and a three-dimensional version of the 10th Standard Cascade indicate that important progress has been made towards the development of a reliable and useful three-dimensional prediction capability that can be used in aeroelastic and aeroacoustic design studies.

  15. Relationship between accuracy and complexity when learning underarm precision throwing.

    PubMed

    Valle, Maria Stella; Lombardo, Luciano; Cioni, Matteo; Casabona, Antonino

    2018-06-12

    Learning of precision ball throwing has mostly been studied to explore the early rapid improvement of accuracy, with little attention paid to possible adaptive processes occurring later, when the rate of improvement is reduced. Here, we tried to demonstrate that the strategy for selecting angle, speed and height at ball release can still be adjusted during the learning periods that follow performance stabilization. To this aim, we used a multivariate linear model with angle, speed and height as predictors of changes in accuracy. Participants performed underarm throws of a tennis ball to hit a target on the floor 3.42 m away. Two training sessions (S1, S2) and one retention test were executed. Performance accuracy increased over S1 and stabilized during S2, with a rate of change along the throwing axis slower than along the orthogonal axis. However, both axes contributed to the performance changes over the learning and consolidation time. A stable relationship between accuracy and the release parameters was observed only during S2, with a good fraction of the performance variance explained by the combination of speed and height. All the changes were maintained during the retention test. Overall, accuracy improvements and the reduction in throwing complexity at ball release followed separate time courses over learning and consolidation.

  16. Using artificial intelligence to predict the risk for posterior capsule opacification after phacoemulsification.

    PubMed

    Mohammadi, Seyed-Farzad; Sabbaghi, Mostafa; Z-Mehrjardi, Hadi; Hashemi, Hassan; Alizadeh, Somayeh; Majdi, Mercede; Taee, Farough

    2012-03-01

    To apply artificial intelligence models to predict the occurrence of posterior capsule opacification (PCO) after phacoemulsification. Farabi Eye Hospital, Tehran, Iran. Clinic-based cross-sectional study. The posterior capsule status of eyes operated on for age-related cataract and the need for laser capsulotomy were determined. After a literature review, data polishing, and expert consultation, 10 input variables were selected. The QUEST algorithm was used to develop a decision tree. Three back-propagation artificial neural networks were constructed with 4, 20, and 40 neurons in 2 hidden layers and trained with the same transfer functions (log-sigmoid and linear transfer) and training protocol on randomly selected eyes. They were then tested on the remaining eyes, and the networks were compared for their performance. Performance indices were used to compare the resultant models with the results of logistic regression analysis. The models were trained using 282 randomly selected eyes and then tested using 70 eyes. Laser capsulotomy for clinically significant PCO was indicated or had been performed 2 years postoperatively in 40 eyes. A sample decision tree was produced with an accuracy of 50% (likelihood ratio 0.8). The best artificial neural network, which showed 87% accuracy and a positive likelihood ratio of 8, was achieved with 40 neurons. The area under the receiver-operating-characteristic curve was 0.71. In comparison, logistic regression reached an accuracy of 80%; however, the likelihood ratio was not measurable because the sensitivity was zero. A prototype artificial neural network was developed that predicted posterior capsule status (requiring capsulotomy) with reasonable accuracy. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
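
    The likelihood-ratio arithmetic behind those figures is simple enough to state exactly; the counts below are hypothetical, chosen only to reproduce an LR+ of 8:

    ```python
    def positive_lr(tp, fn, tn, fp):
        """Positive likelihood ratio: sensitivity / (1 - specificity)."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity / (1.0 - specificity)

    # With zero sensitivity (tp = 0), as for the logistic regression above,
    # LR+ collapses to 0 and stops being informative.
    print(positive_lr(tp=8, fn=2, tn=54, fp=6))   # -> 8.0
    ```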

  17. GAPIT: genome association and prediction integrated tool.

    PubMed

    Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu

    2012-09-15

    Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.

  18. Ruthenium oxide ion selective thin-film electrodes for engine oil acidity monitoring

    NASA Astrophysics Data System (ADS)

    Maurya, D. K.; Sardarinejad, A.; Alameh, K.

    2015-06-01

    We demonstrate the concept of a low-cost, rugged, miniaturized ion-selective electrode (ISE) comprising a thin-film RuO2-on-platinum sensing electrode deposited by RF magnetron sputtering, in conjunction with integrated Ag/AgCl and Ag reference electrodes, for engine oil acidity monitoring. Model oil samples were produced by adding nitric acid to fresh, fully synthetic engine oil and used for sensor evaluation. Experimental results show a linear potential-versus-acid-concentration response for nitric acid concentrations between 0 (fresh oil) and 400 ppm, which demonstrates the accuracy of the RuO2 sensor in real-time operation, making it attractive for use in cars and industrial engines.

  19. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823

  20. Determination of d-limonene in adipose tissue by gas chromatography-mass spectrometry

    PubMed Central

    Miller, Jessica A.; Hakim, Iman A.; Thomson, Cynthia; Thompson, Patricia; Chow, H-H. Sherry

    2008-01-01

    We developed a novel method for analyzing d-limonene levels in adipose tissue. Fat samples were subjected to saponification followed by solvent extraction. d-Limonene in the sample extract was analyzed using gas chromatography-mass spectrometry (GC-MS) with selected ion monitoring. Linear calibration curves were established over the mass range of 79.0-2,529 ng d-limonene per 0.1 g of adipose tissue. Satisfactory within-day precision (RSD 6.7 to 9.6%) and accuracy (% difference of −2.7 to 3.8%) and between-day precision (RSD 6.0 to 10.7%) and accuracy (% difference of 1.8 to 2.6%) were achieved. The assay was successfully applied to human fat biopsy samples from a d-limonene feeding trial. PMID:18571481

  1. Accuracy assessment of linear spectral mixture model due to terrain undulation

    NASA Astrophysics Data System (ADS)

    Wang, Tianxing; Chen, Songlin; Ma, Ya

    2008-12-01

    Mixed spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. The LSMM, also known as spectral mixture analysis, is a widely used procedure for determining the proportions of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of the LSMM is restricted by a variety of factors, but research on the LSMM has mostly focused on assessing nonlinear effects and on techniques for selecting endmembers; unfortunately, the environmental conditions of the study area that can affect unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper focuses on the uncertainty in LSMM accuracy resulting from terrain undulation. An ASTER dataset was chosen and the C terrain-correction algorithm was applied to it. Based on this, fractional abundances for different cover types were extracted from both pre- and post-C terrain-illumination-corrected ASTER data using the LSMM. Simultaneously, regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of the LSMM in mountainous areas. Specifically, for vegetation abundances, an improvement in unmixing accuracy of 17.6% (regression against NDVI) and 18.6% (regression against MVI) in R² was achieved by removing terrain undulation. This study thus indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential when applying the LSMM, and it provides a new example of LSMM applications in mountainous areas. In addition, the methods employed in this study can be used to evaluate different terrain-undulation correction algorithms in further studies.
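
    A minimal LSMM sketch, assuming fully constrained unmixing: endmember fractions for a mixed pixel are recovered by non-negative least squares with a sum-to-one constraint imposed through a weighted augmented row. The spectra and fractions are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    E = np.array([[0.10, 0.60, 0.30],    # rows: bands; columns: endmember spectra
                  [0.15, 0.55, 0.35],
                  [0.60, 0.30, 0.10],
                  [0.70, 0.25, 0.05]])
    true_f = np.array([0.5, 0.3, 0.2])
    pixel = E @ true_f                   # noiseless mixed pixel

    w = 10.0                             # weight enforcing sum(fractions) ~ 1
    E_aug = np.vstack([E, w * np.ones(E.shape[1])])
    p_aug = np.append(pixel, w)
    fractions, _ = nnls(E_aug, p_aug)
    print(fractions.round(3))            # ~ [0.5, 0.3, 0.2]
    ```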

  2. Factors affecting GEBV accuracy with single-step Bayesian models.

    PubMed

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.

  3. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    NASA Astrophysics Data System (ADS)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is increasingly popular in animal and plant breeding industries around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied to the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV obtained ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimates made by MixP and gsbay are expected to be more reliable than those from GBLUP. Predictions made by gsbay were more robust, while with MixP the computation is much faster, especially when dealing with large-scale data. These results suggest that the algorithms implemented by both MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.

  4. Evaluation of the accuracy of linear measurements on multi-slice and cone beam computed tomography scans to detect the mandibular canal during bilateral sagittal split osteotomy of the mandible.

    PubMed

    Freire-Maia, B; Machado, V deC; Valerio, C S; Custódio, A L N; Manzi, F R; Junqueira, J L C

    2017-03-01

    The aim of this study was to compare the accuracy of linear measurements of the distance between the mandibular cortical bone and the mandibular canal using 64-detector multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT). It was sought to evaluate the reliability of these examinations in detecting the mandibular canal for use in bilateral sagittal split osteotomy (BSSO) planning. Eight dry human mandibles were studied. Three sites, corresponding to the lingula, the angle, and the body of the mandible, were selected. After the CT scans had been obtained, the mandibles were sectioned and the bone segments measured to obtain the actual measurements. On analysis, no statistically significant difference was found between the measurements obtained through MSCT and CBCT, or when comparing the measurements from these scans with the actual measurements. It is concluded that the images obtained by CT scan, both 64-detector multi-slice and cone beam, can be used to obtain accurate linear measurements to locate the mandibular canal for preoperative planning of BSSO. The ability to correctly locate the mandibular canal during BSSO will reduce the occurrence of neurosensory disturbances in the postoperative period. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix- and matrix-vector-oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic-range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate-bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
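
    The digit-level trick behind "multiplication by digital convolution" can be shown in a few lines: the digit sequence of a product is the convolution of the factors' digit sequences followed by carry propagation, which is what lets a convolver multiply digitally encoded numbers (a generic illustration, not the report's architecture):

    ```python
    import numpy as np

    def digits(n, base=10):              # little-endian digit vector of n
        out = []
        while n:
            n, d = divmod(n, base)
            out.append(d)
        return np.array(out or [0])

    def conv_multiply(a, b, base=10):
        raw = np.convolve(digits(a, base), digits(b, base))   # digit-wise products
        carry, result = 0, 0
        for i, d in enumerate(raw):                           # propagate carries
            carry, digit = divmod(int(d) + carry, base)
            result += digit * base**i
        return result + carry * base**len(raw)

    print(conv_multiply(1234, 5678), 1234 * 5678)             # both 7006652
    ```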

  6. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    NASA Astrophysics Data System (ADS)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models in modeling all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
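
    A hedged sketch of that pipeline in scikit-learn terms: rank attributes with a random forest, keep the most important ones, and fit a multilayer perceptron on the reduced set. The data, top-k cutoff and network sizes are assumptions:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 20))       # e.g., meteorological/traffic attributes
    y = 2 * X[:, 0] + X[:, 3] - 0.5 * X[:, 7] + rng.normal(scale=0.1, size=500)

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-5:]           # top-5 attributes (assumed cutoff)

    mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000,
                       random_state=0).fit(X[:, keep], y)
    print("kept:", sorted(keep.tolist()), "R^2:", round(mlp.score(X[:, keep], y), 3))
    ```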

  7. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving the sub-grid scale accuracy of flow to conduits, wells, rivers, or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand-side vector for a symmetric finite-difference Picard implementation, or on the left-hand-side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained in an implicit implementation by selecting only contributing nodes that are part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  8. Cuff-less blood pressure measurement using pulse arrival time and a Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Chen, Xianxiang; Fang, Zhen; Xue, Yongjiao; Zhan, Qingyuan; Yang, Ting; Xia, Shanhong

    2017-02-01

    The present study designs an algorithm to increase the accuracy of continuous blood pressure (BP) estimation. Pulse arrival time (PAT) has been widely used for continuous BP estimation. However, because of motion artifacts and physiological activity, PAT-based methods often suffer from low BP estimation accuracy. This paper used a signal-quality-modified Kalman filter to track blood pressure changes. A Kalman filter guarantees that the BP estimate is optimal in the sense of minimizing the mean square error. We propose a joint signal quality index to adjust the measurement noise covariance, pushing the Kalman filter to weigh measurements from cleaner data more heavily. Twenty 2 h physiological data segments selected from the MIMIC II database were used to evaluate the performance. Compared with straightforward use of the PAT-based linear regression model, the proposed model achieved higher measurement accuracy. Due to its low computational complexity, the proposed algorithm can easily be transplanted into wearable sensor devices.
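
    A minimal scalar version of the signal-quality-modified filter: blood pressure follows a random walk, and the measurement-noise covariance R is inflated whenever the joint signal-quality index (SQI) is poor, so cleaner beats are weighted more heavily. All signals and constants are illustrative:

    ```python
    import numpy as np

    def sqi_kalman(z, sqi, q=0.5, r0=4.0):
        x, p = z[0], 10.0                 # initial state estimate and variance
        out = []
        for zk, sk in zip(z, sqi):
            p += q                        # predict: random-walk process noise
            r = r0 / max(sk, 1e-3)        # poor quality (small SQI) -> large R
            k = p / (p + r)               # Kalman gain
            x += k * (zk - x)             # update with the PAT-derived BP value
            p *= 1.0 - k
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(2)
    z = 120 + rng.normal(scale=5.0, size=50)      # noisy PAT-based SBP estimates (mmHg)
    sqi = rng.uniform(0.2, 1.0, size=50)          # joint signal-quality index in (0, 1]
    print(sqi_kalman(z, sqi)[:5].round(1))
    ```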

  9. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  10. Ionization enhancement in atmospheric pressure chemical ionization and suppression in electrospray ionization between target drugs and stable-isotope-labeled internal standards in quantitative liquid chromatography/tandem mass spectrometry.

    PubMed

    Liang, H R; Foltz, R L; Meng, M; Bennett, P

    2003-01-01

    The phenomena of ionization suppression in electrospray ionization (ESI) and enhancement in atmospheric pressure chemical ionization (APCI) were investigated in selected-ion monitoring and selected-reaction monitoring modes for nine drugs and their corresponding stable-isotope-labeled internal standards (IS). The results showed that all investigated target drugs and their co-eluting isotope-labeled IS suppress each other's ionization responses in ESI. The factors affecting the extent of suppression in ESI were investigated, including structures and concentrations of drugs, matrix effects, and flow rate. In contrast to the ESI results, APCI caused seven of the nine investigated target drugs and their co-eluting isotope-labeled IS to enhance each other's ionization responses. The mutual ionization suppression or enhancement between drugs and their isotope-labeled IS could possibly influence assay sensitivity, reproducibility, accuracy and linearity in quantitative liquid chromatography/mass spectrometry (LC/MS) and liquid chromatography/tandem mass spectrometry (LC/MS/MS). However, calibration curves were linear if an appropriate IS concentration was selected for a desired calibration range to keep the response factors constant. Copyright 2003 John Wiley & Sons, Ltd.

  11. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    PubMed Central

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767

  12. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.

    PubMed

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease.

  13. Predicting temperate forest stand types using only structural profiles from discrete return airborne lidar

    NASA Astrophysics Data System (ADS)

    Fedrigo, Melissa; Newnham, Glenn J.; Coops, Nicholas C.; Culvenor, Darius S.; Bolton, Douglas K.; Nitschke, Craig R.

    2018-02-01

    Light detection and ranging (lidar) data have been increasingly used for forest classification due to their ability to penetrate the forest canopy and provide detail about the structure of the lower strata. In this study we demonstrate forest classification approaches using airborne lidar data as inputs to random forest and linear unmixing classification algorithms. Our results demonstrated that both random forest and linear unmixing models identified a distribution of rainforest and eucalypt stands that was comparable to existing ecological vegetation class (EVC) maps based primarily on manual interpretation of high-resolution aerial imagery. Rainforest stands were also identified in the region that had not previously been identified in the EVC maps. The transition between stand types was better characterised by the random forest modelling approach. In contrast, the linear unmixing model placed greater emphasis on field plots selected as endmembers, which may not have captured the variability in stand structure within a single stand type. The random forest model had the highest overall accuracy (84%) and Cohen's kappa coefficient (0.62). However, its classification accuracy was only marginally better than that of linear unmixing. The random forest model was applied to a region in the Central Highlands of south-eastern Australia to produce maps of stand type probability, including areas of transition (the 'ecotone') between rainforest and eucalypt forest. The resulting map provided a detailed delineation of forest classes, which specifically recognised the coalescing of stand types at the landscape scale. This represents a key step towards mapping the structural and spatial complexity of these ecosystems, which is important for both their management and conservation.

  14. Modeling of Geometric Error in Linear Guide Way to Improved the vertical three-axis CNC Milling machine’s accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach of mathematically modeling machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and a key factor in building machines with high accuracy. The accuracy of a three-axis vertical milling machine can be improved by knowing the geometric errors and identifying the error position parameters in the machine tool through mathematical modeling. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity (squareness) error parameters. The mathematical modeling approach accounts for the alignment and angular errors in the components supporting the machine motion, namely the linear guide way and the linear motion stage. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools can illustrate the relationship between alignment error, position and angle on the linear guide way of a three-axis vertical milling machine.
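
    The standard way such error parameters enter a model is through a small-angle homogeneous transformation matrix (HTM) per axis; the sketch below builds one from three translational and three angular errors at a commanded position. The numeric error values are invented placeholders:

    ```python
    import numpy as np

    def error_htm(dx, dy, dz, ea, eb, ec):
        """Small-angle HTM: dx/dy/dz in mm, ea/eb/ec = roll/pitch/yaw in rad."""
        return np.array([[1.0, -ec,  eb, dx],
                         [ ec, 1.0, -ea, dy],
                         [-eb,  ea, 1.0, dz],
                         [0.0, 0.0, 0.0, 1.0]])

    T = error_htm(dx=0.004, dy=0.002, dz=-0.001,   # linear errors at X = 200 mm (assumed)
                  ea=2e-5, eb=-1e-5, ec=3e-5)      # angular errors (assumed)
    nominal = np.array([200.0, 0.0, 50.0, 1.0])    # nominal tool point (homogeneous)
    print((T @ nominal)[:3])                       # actual tool point under the errors
    ```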

  15. A comparative study of additive and subtractive manufacturing for dental restorations.

    PubMed

    Bae, Eun-Jeong; Jeong, Il-Do; Kim, Woong-Chul; Kim, Ji-Hwan

    2017-08-01

    Digital systems have recently found widespread application in the fabrication of dental restorations. For the clinical assessment of dental restorations fabricated digitally, it is necessary to evaluate their accuracy. However, studies of the accuracy of inlay restorations fabricated with additive manufacturing are lacking. The purpose of this in vitro study was to evaluate and compare the accuracy of inlay restorations fabricated by using recently introduced additive manufacturing with the accuracy of subtractive methods. The inlay (distal occlusal cavity) shape was fabricated using 3-dimensional image (reference data) software. Specimens were fabricated using 4 different methods (each n=10, total N=40), including 2 additive manufacturing methods, stereolithography apparatus and selective laser sintering; and 2 subtractive methods, wax and zirconia milling. Fabricated specimens were scanned using a dental scanner and then compared by overlapping reference data. The results were statistically analyzed using a 1-way analysis of variance (α=.05). Additionally, the surface morphology of 1 randomly (the first of each specimen) selected specimen from each group was evaluated using a digital microscope. The results of the overlap analysis of the dental restorations indicated that the root mean square (RMS) deviation observed in the restorations fabricated using the additive manufacturing methods were significantly different from those fabricated using the subtractive methods (P<.05). However, no significant differences were found between restorations fabricated using stereolithography apparatus and selective laser sintering, the additive manufacturing methods (P=.466). Similarly, no significant differences were found between wax and zirconia, the subtractive methods (P=.986). The observed RMS values were 106 μm for stereolithography apparatus, 113 μm for selective laser sintering, 116 μm for wax, and 119 μm for zirconia. Microscopic evaluation of the surface revealed a fine linear gap between the layers of restorations fabricated using stereolithography apparatus and a grooved hole with inconsistent weak scratches when fabricated using selective laser sintering. In the wax and zirconia restorations, possible traces of milling bur passes were observed. The results indicate that the accuracy of dental restorations fabricated using the additive manufacturing methods is higher than that of subtractive methods. Therefore, additive manufacturing methods are a viable alternative to subtractive methods. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  16. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, and cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy; rather, its main effect is the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared with the 67 percent of the database required by the linear approach.

  17. Evaluation of the performance of the OneTouch Select Plus blood glucose test system against ISO 15197:2013.

    PubMed

    Setford, Steven; Smith, Antony; McColl, David; Grady, Mike; Koria, Krisna; Cameron, Hilary

    2015-01-01

    Assess laboratory and in-clinic performance of the OneTouch Select® Plus test system against the ISO 15197:2013 standard for measurement of blood glucose. System performance was assessed in the laboratory against key patient, environmental and pharmacologic factors. User performance was assessed in clinic by system-naïve lay-users. Healthcare professionals assessed system accuracy on diabetes subjects in clinic. The system demonstrated high levels of performance, meeting ISO 15197:2013 requirements in laboratory testing (precision, linearity, hematocrit, temperature, humidity and altitude). System performance was tested against 28 interferents, with an adverse interfering effect being recorded only for pralidoxime iodide. Clinic user performance results fulfilled ISO 15197:2013 accuracy criteria. Subjects agreed that the color range indicator clearly showed whether they were low, in-range or high and helped them better understand glucose results. The system evaluated is accurate and meets all ISO 15197:2013 requirements as per the tests described. The color range indicator helped subjects understand glucose results and supports patients in following healthcare professional recommendations on glucose targets.

  18. A mechatronics platform to study prosthetic hand control using EMG signals.

    PubMed

    Geethanjali, P

    2016-09-01

    In this paper, a low-cost mechatronics platform for the design and development of robotic hands, as well as a surface electromyogram (EMG) pattern recognition system, is proposed. The paper also explores various EMG classification techniques using a low-cost electronics system in prosthetic hand applications. The proposed platform involves the development of a four-channel EMG signal acquisition system; pattern recognition of the acquired EMG signals; and the development of a digital controller for a robotic hand. Four-channel surface EMG signals, acquired from ten healthy subjects for six different hand movements, were used to analyse pattern recognition in prosthetic hand control. Various time-domain features were extracted and grouped into five ensembles to compare the influence of features in the feature-selective classifier (SLR) with widely used non-feature-selective classifiers, such as neural networks (NN), linear discriminant analysis (LDA) and support vector machines (SVM) applied with different kernels. The results showed that the average classification accuracy of the SVM with a linear kernel function outperforms the other classifiers with the feature ensembles, Hudgin's feature set and autoregression (AR) coefficients. However, the slight improvement in the classification accuracy of the SVM incurs more processing time and memory space in the low-level controller. The Kruskal-Wallis (KW) test also shows that there is no significant difference between the classification performance of SLR with Hudgin's feature set and that of SVM with Hudgin's features along with AR coefficients. In addition, the KW test shows that SLR was found to be better with respect to computation time and memory space, which is vital in a low-level controller. Similar to the SVM with a linear kernel function, the other non-feature-selective LDA and NN classifiers also show a slight improvement in performance using twice the features, but with the drawback of increased memory space and time requirements. This prototype facilitated the study of various issues in pattern recognition and identified an efficient classifier, along with a feature ensemble, for the implementation of EMG-controlled prosthetic hands in a laboratory setting at low cost. This platform may help to motivate and facilitate prosthetic hand research in developing countries.

  19. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data.

    PubMed

    Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing

    2017-05-15

    Hyperspectral remote sensing technology can acquire nearly continuous spectral information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the original band with the largest amount of information is determined based on mutual information theory. Subsequently, a second band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted between the proposed method and traditional sea ice detection methods. The proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results show that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection.
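
    The linear-prediction step can be sketched directly: given the bands already selected, the next band chosen is the one worst predicted (largest least-squares residual) from them, i.e. the least redundant band. The hyperspectral cube and initial bands are simulated assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    cube = rng.normal(size=(2000, 30))     # pixels x bands (toy stand-in for an image)
    selected = [5, 17]                     # assume two bands chosen by the earlier steps

    def next_band(cube, selected):
        A = np.column_stack([cube[:, selected], np.ones(len(cube))])
        residuals = np.full(cube.shape[1], -np.inf)
        for b in range(cube.shape[1]):
            if b in selected:
                continue                   # skip bands already chosen
            coef, *_ = np.linalg.lstsq(A, cube[:, b], rcond=None)
            residuals[b] = np.linalg.norm(cube[:, b] - A @ coef)
        return int(np.argmax(residuals))   # least linearly predictable band

    print("next band:", next_band(cube, selected))
    ```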

  20. Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process

    NASA Astrophysics Data System (ADS)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed by a traditional lathe machine. However, recent rapid advances in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of the conventional lathe is its mechanical contact, which leads to undesirable tool wear, a heat-affected zone, and poor finishing and dimensional accuracy, especially taper quality, when machining stock with a high length-to-diameter ratio. Therefore, a novel approach has been devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a machine with 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed, consisting of eight (8) sequential runs replicated three (3) times. The experimental results were then used to establish a Mamdani-fuzzy predictive model, which yielded an accuracy of more than 95%. Thus, the proposed Mamdani-fuzzy modelling approach is found to be suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.
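
    A hand-rolled miniature of Mamdani inference (triangular memberships, min firing strength, max aggregation, centroid defuzzification) predicting a quality score from one input; the rules, ranges and units are invented for illustration:

    ```python
    import numpy as np

    def tri(x, a, b, c):                 # triangular membership function
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    quality = np.linspace(0.0, 10.0, 501)   # output universe of discourse
    speed = 5.0                             # crisp input: cutting speed (assumed units)

    # Rule 1: slow speed -> high quality.  Rule 2: fast speed -> low quality.
    fire_slow = tri(speed, 0.0, 2.0, 6.0)
    fire_fast = tri(speed, 4.0, 8.0, 12.0)
    agg = np.maximum(np.minimum(fire_slow, tri(quality, 5.0, 8.0, 10.0)),
                     np.minimum(fire_fast, tri(quality, 0.0, 2.0, 5.0)))

    centroid = (quality * agg).sum() / agg.sum()   # defuzzified output
    print(f"predicted quality = {centroid:.2f}")
    ```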

  1. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data

    PubMed Central

    Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing

    2017-01-01

    Hyperspectral remote sensing technology can acquire nearly continuous spectral information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the original band with the largest amount of information is determined based on mutual information theory. Subsequently, a second band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted between the proposed method and traditional sea ice detection methods. The proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results show that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection. PMID:28505135

  2. Multivariate predictors of music perception and appraisal by adult cochlear implant users.

    PubMed

    Gfeller, Kate; Oleson, Jacob; Knutson, John F; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol

    2008-02-01

    The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.

  3. The employment of Support Vector Machine to classify high and low performance archers based on bio-physiological variables

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair

    2018-04-01

    The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. Fifty youth archers with an average age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed, and it grouped them into high potential archers (HPA) and low potential archers (LPA). SVM models with linear, quadratic, and cubic kernel functions were then trained on the aforementioned variables. The linear SVM exhibited the best performance, with a classification accuracy of 94%, in comparison with the other tested models. These findings can be valuable to coaches and sports managers in recognising high potential athletes from the bio-physiological variables examined.
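    The two-stage pipeline (unsupervised labelling, then kernel comparison) can be sketched with scikit-learn as follows; the bio-physiological matrix is a random placeholder, and the real study derived the cluster labels from the archers' shooting scores and measured variables:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 5))   # 50 archers x 5 bio-physiological variables (toy)

      # k-means splits the archers into two performance clusters (HPA vs LPA)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

      # compare linear, quadratic and cubic kernels, as in the study
      for kernel, degree in [("linear", 1), ("poly", 2), ("poly", 3)]:
          clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, degree=degree))
          acc = cross_val_score(clf, X, labels, cv=5).mean()
          print(f"{kernel} (degree {degree}): {acc:.2f}")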

  4. RP-HPLC Method Development and Validation for Determination of Eptifibatide Acetate in Bulk Drug Substance and Pharmaceutical Dosage Forms

    PubMed Central

    Bavand Savadkouhi, Maryam; Vahidi, Hossein; Ayatollahi, Abdul Majid; Hooshfar, Shirin; Kobarfard, Farzad

    2017-01-01

    A new, rapid, economical and isocratic reverse-phase high-performance liquid chromatography (RP-HPLC) method was developed for the determination of eptifibatide acetate, a small synthetic antiplatelet peptide, in bulk drug substance and pharmaceutical dosage forms. The developed method was validated as per ICH guidelines. The chromatographic separation was achieved isocratically on a C18 column (150 x 4.60 mm i.d., 5 µm particle size) at ambient temperature using acetonitrile (ACN), water and trifluoroacetic acid (TFA) as the mobile phase at a flow rate of 1 mL/min and UV detection at 275 nm. Eptifibatide acetate exhibited linearity over the concentration range of 0.15-2 mg/mL (r2=0.997) with a limit of detection of 0.15 mg/mL. The accuracy of the method was 96.4-103.8%. The intra-day and inter-day precisions were 0.052% and 0.598%, respectively. The successfully validated method, with excellent selectivity, linearity, sensitivity, precision and accuracy, is applicable to the assay of eptifibatide acetate in bulk drug substance and pharmaceutical dosage forms. PMID:28979304

  5. Toward efficient biomechanical-based deformable image registration of lungs for image-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy

    2011-08-01

    Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential of improving the efficiency of deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models were developed using image data of 14 lung cancer patients. Each model consists of two lungs, tumor and external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effect of element type, finite deformation and elasticity on accuracy and computing time is investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between the linear and nonlinear analysis methods were calculated for all lung nodes, and a maximum value of 3.6 mm was found at a node near the entrance of the bronchial tree into the lungs. The 95th percentile of the displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min for the quadratic-element, nonlinear-geometry model to 3.4 min for the linear-element, linear-geometry model. Therefore, using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of lungs for image-guided radiotherapy applications.

  6. An SVM-Based Solution for Fault Detection in Wind Turbines

    PubMed Central

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines alone is insufficient for the diagnosis of mechanical faults in their mechanical transmission chain; a successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that the linear-kernel SVM outperforms the other kernels and the ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of the linear SVM is also analyzed experimentally, leading to the conclusion that this data acquisition technique generates linearly separable datasets. PMID:25760051

  7. Genomic selection for slaughter age in pigs using the Cox frailty model.

    PubMed

    Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F

    2015-10-19

    The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphisms (SNPs) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of each method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) from both models with the corrected phenotypic values. The analysis was repeated with a subset of the SNP markers with the largest absolute effects. For uncensored data under normality, the two models agreed in the prediction of GBVs and the estimation of marker effects. When censored data were considered, however, the Cox model with a normal random effect (S1) was more appropriate. Since the linear mixed model with imputed data (L2) did not agree with it in the prediction of genomic values and the estimation of marker effects, model S1 was considered superior, as it takes into account the latent variable and the censored data. Marker selection increased the correlations between the ranks of GBVs predicted by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the characteristic analyzed.
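    A toy version of the marker-based survival analysis, with a plain Cox proportional-hazards model (via the lifelines package) standing in for the frailty model S1; the marker count, effect sizes, and censoring rate below are invented:

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from scipy.stats import spearmanr

      rng = np.random.default_rng(1)
      n, m = 300, 20                            # animals, SNP markers (toy scale)
      snps = rng.integers(0, 3, size=(n, m))    # genotypes coded 0/1/2
      risk = snps @ rng.normal(0, 0.1, size=m)  # latent genetic effect on hazard
      days = rng.exponential(np.exp(-risk) * 150)       # birth-to-culling time
      event = (rng.random(n) < 0.8).astype(int)         # ~20% right-censored

      df = pd.DataFrame(snps, columns=[f"snp{i}" for i in range(m)])
      df["T"], df["E"] = days, event

      cph = CoxPHFitter(penalizer=0.1)          # ridge penalty stabilises many markers
      cph.fit(df, duration_col="T", event_col="E")

      # marker-based breeding-value proxy; higher hazard implies earlier culling,
      # so the rank correlation with the phenotype is expected to be negative
      gbv = snps @ cph.params_.values
      print("rank correlation with phenotype:", round(spearmanr(gbv, days)[0], 2))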

  8. Prediction of siRNA potency using sparse logistic regression.

    PubMed

    Hu, Wei; Hu, John

    2014-06-01

    RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as a trigger for the RNAi gene inhibition mechanism, and therefore is a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position-specific nucleotide compositions which has the same prediction accuracy as the linear model based on LASSO. The weights in our new model share the same general trend as those in the previous model, but it has only 25 nonzero weights out of a total of 84 weights, a 54% reduction compared to the previous model. Contrary to the linear model based on LASSO, our model suggests that only a few positions are influential on the efficacy of the siRNA, namely the 5' and 3' ends and the seed region of siRNA sequences. We also employed sparse logistic regression to build a linear model using dual-position-specific nucleotide compositions, a task LASSO is not able to accomplish well due to its high-dimensional nature. Our results demonstrate the superiority of sparse logistic regression as a technique for both feature selection and regression over LASSO in the context of siRNA design.
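    A minimal sketch of sparse logistic regression over single-position-specific nucleotide compositions (4 nucleotides x 21 positions = 84 weights, the dimensionality quoted above); the sequences and potency labels are random placeholders:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n, L = 500, 21                        # siRNAs, sequence length
      seqs = rng.integers(0, 4, size=(n, L))      # A/C/G/U coded as 0..3

      # one-hot encode: one indicator per (position, nucleotide) pair -> 84 features
      X = np.zeros((n, 4 * L))
      X[np.arange(n)[:, None], 4 * np.arange(L) + seqs] = 1.0
      y = rng.integers(0, 2, size=n)        # potent / non-potent (placeholder)

      # the L1 penalty drives most positional weights to exactly zero
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
      print(f"{np.count_nonzero(clf.coef_)} nonzero weights out of {clf.coef_.size}")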

  9. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on the short-time fractional-order Fourier transform (STFRFT) for the identification of complicated moving targets. The algorithm, which incorporates an STFRFT order-changing and quick-selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal; it improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate the STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.

  10. Estimation of Curve Tracing Time in Supercapacitor based PV Characterization

    NASA Astrophysics Data System (ADS)

    Basu Pal, Sudipta; Das Bhattacharya, Konika; Mukherjee, Dipankar; Paul, Debkalyan

    2017-08-01

    Smooth and noise-free characterisation of photovoltaic (PV) generators has been revisited with renewed interest as large PV arrays make inroads into the urban sector of major developing countries. Such characterisation has been hampered by the difficulty of finding a suitable data acquisition system and by the lack of a supporting theoretical analysis to justify the accuracy of curve tracing. The use of a selected bank of supercapacitors can mitigate these problems to a large extent. Assuming a piecewise-linear analysis of the V-I characteristics of a PV generator, an accurate analysis of curve-plotting time becomes possible. The analysis has been extended to consider the effect of the equivalent series resistance of the supercapacitor, leading to increased accuracy (90-95%) of curve-plotting times.
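    The charge-time analysis can be illustrated numerically: with a piecewise-linear PV I-V curve, the supercapacitor bank charges at a constant current up to the knee voltage and then along the linear branch, with the equivalent series resistance (ESR) entering the current balance. All component values below are hypothetical:

      # Hypothetical PV module: piecewise-linear I-V curve with a knee at Vk
      Isc, Voc, Vk = 8.0, 40.0, 32.0        # short-circuit A, open-circuit V, knee V
      C, esr = 10.0, 0.05                   # supercapacitor bank: farads, ohms

      def charge_current(v_cap):
          # current into the capacitor given its internal voltage, ESR included
          if v_cap + Isc * esr < Vk:        # constant-current region
              return Isc
          # linear region: solve i = Isc*(Voc - (v_cap + i*esr))/(Voc - Vk) for i
          return Isc * (Voc - v_cap) / ((Voc - Vk) + Isc * esr)

      t, v, dt = 0.0, 0.0, 1e-3             # simple forward-Euler integration
      while v < 0.99 * Voc:                 # trace up to ~99% of open-circuit voltage
          v += charge_current(v) * dt / C
          t += dt
      print(f"estimated curve-tracing time: {t:.1f} s")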

  11. P300 Chinese input system based on Bayesian LDA.

    PubMed

    Jin, Jing; Allison, Brendan Z; Brunner, Clemens; Wang, Bei; Wang, Xingyu; Zhang, Jianhua; Neuper, Christa; Pfurtscheller, Gert

    2010-02-01

    A brain-computer interface (BCI) is a new communication channel between humans and computers that translates brain activity into recognizable command and control signals. Attended events can evoke P300 potentials in the electroencephalogram; hence, the P300 has been used in BCI systems to spell, to control cursors or robotic devices, and for other tasks. This paper introduces a novel P300 BCI for communicating Chinese characters. To improve classification accuracy, an optimization algorithm (particle swarm optimization, PSO) is used for channel selection (i.e., identifying the best electrode configuration). The effects of different electrode configurations on classification accuracy were tested offline with Bayesian linear discriminant analysis. The offline results from 11 subjects show that this new P300 BCI can effectively communicate Chinese characters and that the features extracted from the electrodes selected by PSO yield good performance.
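    The channel-selection step can be sketched with greedy forward selection and ordinary LDA standing in, respectively, for the paper's PSO search and Bayesian LDA; the epoched EEG array and the montage size are placeholders:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_trials, n_channels, n_samples = 200, 32, 20
      X = rng.normal(size=(n_trials, n_channels, n_samples))  # epoched EEG (toy)
      y = rng.integers(0, 2, size=n_trials)                   # target vs non-target

      def montage_score(channels):
          # cross-validated LDA accuracy using only the given channels
          feats = X[:, channels, :].reshape(n_trials, -1)
          return cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()

      selected, remaining = [], list(range(n_channels))
      for _ in range(8):                    # build an 8-channel configuration
          best = max(remaining, key=lambda c: montage_score(selected + [c]))
          selected.append(best)
          remaining.remove(best)
      print("selected channels:", selected)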

  12. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
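    The three forecast models and the adjustment-point scheme can be sketched as follows; the morphology feature series, its dimension, and the adjustment schedule are synthetic stand-ins:

      import numpy as np

      rng = np.random.default_rng(4)
      T, d = 33, 6                          # treatment time points, feature dimension
      truth = np.cumsum(rng.normal(0, 0.3, size=(T, d)), axis=0)  # drifting features

      def forecast(history, t_future, model):
          # predict the feature vector at a future time from the history seen so far
          t0 = len(history) - 1
          if model == "static":             # last observation carried forward
              return history[-1]
          if model == "mean":               # running mean of the history
              return history.mean(axis=0)
          rate = (history[-1] - history[0]) / max(t0, 1)   # "linear": constant rate
          return history[-1] + (t_future - t0) * rate

      adjust_points = [0, 8, 16]            # model re-anchored at two extra points
      for model in ("static", "mean", "linear"):
          errors = []
          for t in range(1, T):
              anchor = max(p for p in adjust_points if p < t)
              pred = forecast(truth[: anchor + 1], t, model)
              errors.append(np.linalg.norm(pred - truth[t]))
          print(model, f"median forecast error = {np.median(errors):.2f}")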

  13. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.

  14. Gender classification of running subjects using full-body kinematics

    NASA Astrophysics Data System (ADS)

    Williams, Christina M.; Flora, Jeffrey B.; Iftekharuddin, Khan M.

    2016-05-01

    This paper proposes a novel automated gender classification of subjects engaged in running activity. The machine learning techniques include preprocessing using principal component analysis, followed by classification with linear discriminant analysis, nonlinear support vector machines, and a decision stump with AdaBoost. The dataset consists of 49 subjects (25 males, 24 females, 2 trials each), all equipped with approximately 80 retroreflective markers. The trials capture the subject's entire body moving unrestrained through a capture volume at a self-selected running speed, thus producing highly realistic data. The classification accuracy using leave-one-out cross-validation for the 49 subjects improves from 66.33% with linear discriminant analysis to 86.74% with the nonlinear support vector machine, and further to 87.76% with the decision stump with AdaBoost classifier. These findings suggest that linear classification approaches are inadequate for classifying gender in a large dataset of subjects running in a moderately uninhibited environment.

  15. A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.

    PubMed

    Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua

    2017-01-01

    The number of Alzheimer's disease patients is increasing rapidly every year, and scholars tend to use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments, testing four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, resilient backpropagation, and scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches and can therefore be used to detect Alzheimer's disease. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
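    Linear regression classification assigns a sample to the class whose training samples reconstruct it best in the least-squares sense. A minimal sketch with synthetic 256-dimensional feature vectors (the length of the pseudo Zernike feature vector above) and just two classes:

      import numpy as np

      def lrc_predict(x, class_mats):
          # pick the class whose training matrix reconstructs x with least residual
          residuals = []
          for A in class_mats:              # A: (n_features, n_class_samples)
              beta, *_ = np.linalg.lstsq(A, x, rcond=None)
              residuals.append(np.linalg.norm(x - A @ beta))
          return int(np.argmin(residuals))

      rng = np.random.default_rng(5)
      centers = rng.normal(size=(2, 256))   # one prototype per class (toy)
      class_mats = [np.stack([c + 0.1 * rng.normal(size=256) for _ in range(10)],
                             axis=1) for c in centers]
      x = centers[1] + 0.1 * rng.normal(size=256)
      print("predicted class:", lrc_predict(x, class_mats))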

  16. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power-fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model over the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
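    The power-fit general linear model reduces to a per-day log-log regression of the daily volume on the pretreatment volume; the volumes below are simulated placeholders:

      import numpy as np

      rng = np.random.default_rng(6)
      n, T = 35, 30                         # tumors, treatment days
      V0 = rng.uniform(5.0, 60.0, size=n)   # pretreatment volumes (toy values)
      days = np.arange(T)
      # synthetic daily volumes following a noisy power-law shrinkage
      V = V0[:, None] ** (1 - 0.01 * days) * np.exp(rng.normal(0, 0.05, size=(n, T)))

      # general linear model: log V_t = log a_t + b_t * log V_0, fitted per day
      A = np.column_stack([np.ones(n), np.log(V0)])
      for t in (5, 15, 25):
          (log_a, b), *_ = np.linalg.lstsq(A, np.log(V[:, t]), rcond=None)
          print(f"day {t}: V_t ~= {np.exp(log_a):.2f} * V0^{b:.2f}")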

  17. On the removal of boundary errors caused by Runge-Kutta integration of non-linear partial differential equations

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.

    1994-01-01

    It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDEs) may, because of boundary conditions, lead to deterioration of the accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic PDEs (linear and non-linear) whose boundary treatment is done via the SAT procedure. A methodology is presented for recovery of the full order of accuracy and has been applied to the case of a 4th-order explicit finite difference scheme.

  18. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen whether the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
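    A minimal Nadaraya-Watson regressor with a Gaussian kernel, the simplest of the nonparametric regressors mentioned above; the descriptor matrix and activities are synthetic, and a leave-one-out evaluation would exclude the query compound from the weighted average:

      import numpy as np

      def nadaraya_watson(x_query, X, y, h=1.0):
          # kernel-weighted local average: sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i)
          d2 = np.sum((X - x_query) ** 2, axis=1)
          w = np.exp(-0.5 * d2 / h ** 2)    # Gaussian kernel weights
          return np.dot(w, y) / w.sum()

      rng = np.random.default_rng(7)
      X = rng.normal(size=(47, 4))          # 47 compounds x 4 descriptors (toy)
      y = X[:, 0] ** 2 + rng.normal(0, 0.1, size=47)   # non-linear "activity"
      print(nadaraya_watson(X[0], X, y, h=0.8))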

  19. Pharmacokinetics and Tissue Distribution Study of Chlorogenic Acid from Lonicerae Japonicae Flos Following Oral Administrations in Rats

    PubMed Central

    Zhou, Yulu; Zhou, Ting; Pei, Qi; Liu, Shikun; Yuan, Hong

    2014-01-01

    Chlorogenic acid (ChA) is proposed as the major bioactive compound of Lonicerae Japonicae Flos (LJF). Forty-two Wistar rats were randomly divided into seven groups to investigate the pharmacokinetics and tissue distribution of ChA after oral administration of an LJF extract, using ibuprofen as the internal standard and employing high performance liquid chromatography in conjunction with tandem mass spectrometry. Analytes were extracted from plasma samples and tissue homogenates by liquid-liquid extraction with acetonitrile, separated on a C18 column by linear gradient elution, and detected by electrospray ionization mass spectrometry in negative selected multiple reaction monitoring mode. Our results demonstrate that the method has satisfactory selectivity, linearity, extraction recovery, matrix effect, precision, accuracy, and stability. A noncompartmental model was used to study the pharmacokinetics; the profile revealed that ChA was rapidly absorbed and eliminated. The tissue study indicated that the highest level was observed in the liver, followed by the kidney, lung, heart, and spleen. In conclusion, this method is suitable for studying the pharmacokinetics and tissue distribution of ChA after oral administration. PMID:25140190

  20. Development and validation of an LC-MS method for determination of Karanjin in rat plasma: application to preclinical pharmacokinetics.

    PubMed

    Yi, Deliang; Wang, Zhihua; Yi, Longzhi

    2015-04-01

    A selective and sensitive liquid chromatography-mass spectrometry (LC-MS) method was developed and validated for the determination of karanjin in rat plasma. The target analyte, together with the internal standard (warfarin), was extracted from rat plasma by liquid-liquid extraction with ethyl acetate. Chromatographic separation was performed on a ZORBAX SB-C18 column using a mixture of acetonitrile and 0.1% aqueous formic acid as the mobile phase with linear gradient elution. MS detection was performed on a single-quadrupole MS in selected ion monitoring mode via a positive electrospray ionization source. The assay exhibited a linear dynamic range of 2.50-3,000 ng/mL for karanjin. The intra- and inter-day precision was <10.8%, and the intra- and inter-day accuracy was <9.2%. The validated method has been applied to preclinical pharmacokinetic studies of karanjin following oral administration of 5, 10 and 20 mg/kg karanjin to rats. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Precision of dehydroascorbic acid quantitation with the use of the subtraction method--validation of HPLC-DAD method for determination of total vitamin C in food.

    PubMed

    Mazurek, Artur; Jamroz, Jerzy

    2015-04-15

    In food analysis, a method for the determination of vitamin C should enable measurement of the total content of ascorbic acid (AA) and dehydroascorbic acid (DHAA), because both chemical forms exhibit biological activity. The aim of the work was to confirm the applicability of an HPLC-DAD method for analysis of the total content of vitamin C (TC) and of ascorbic acid in various types of food by determination of validation parameters such as selectivity, precision, accuracy, linearity and the limits of detection and quantitation. The results showed that the method applied for determination of TC and AA was selective, linear and precise. The precision of DHAA determination by the subtraction method was also evaluated. It was revealed that the results of DHAA determination obtained by the subtraction method were not precise, which follows directly from the assumptions of this method and the principles of uncertainty propagation. The proposed chromatographic method can be recommended for routine determinations of total vitamin C in various foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
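    The imprecision of the subtraction method follows from uncertainty propagation for a difference: the absolute uncertainties of TC and AA combine in quadrature while the difference itself is small. A numeric illustration with invented values:

      import math

      # DHAA = TC - AA  =>  u(DHAA) = sqrt(u(TC)^2 + u(AA)^2)
      TC, u_TC = 52.0, 1.0    # total vitamin C, mg/100 g (hypothetical, ~2% RSD)
      AA, u_AA = 48.0, 0.9    # ascorbic acid, mg/100 g (hypothetical, ~2% RSD)

      DHAA = TC - AA
      u_DHAA = math.sqrt(u_TC ** 2 + u_AA ** 2)
      print(f"DHAA = {DHAA:.1f} +/- {u_DHAA:.1f} (relative: {100 * u_DHAA / DHAA:.0f}%)")
      # ~2% relative uncertainty on TC and AA becomes ~34% on their small difference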

  2. Determination of chlorpyrifos and its metabolites in cells and culture media by liquid chromatography-electrospray ionization tandem mass spectrometry.

    PubMed

    Yang, Xiangkun; Wu, Xian; Brown, Kyle A; Le, Thao; Stice, Steven L; Bartlett, Michael G

    2017-09-15

    A sensitive method to simultaneously quantitate chlorpyrifos (CPF), chlorpyrifos oxon (CPO) and the detoxified product 3,5,6-trichloro-2-pyridinol (TCP) was developed, using either liquid-liquid extraction for culture media samples or protein precipitation for cell samples. Multiple reaction monitoring in positive ion mode was applied for the detection of chlorpyrifos and chlorpyrifos oxon, and selected ion recording in negative mode was applied to detect TCP. The method provided linear ranges of 5-500, 0.2-20 and 20-2000 ng/mL for media samples and 0.5-50, 0.02-2 and 2-200 ng/million cells for CPF, CPO and TCP, respectively. The method was validated using selectivity, linearity, precision, accuracy, recovery, stability and dilution tests. All relative standard deviations (RSDs) and relative errors (REs) for QC samples were within 15% (except for the LLOQ, within 20%). This method has been successfully applied to study the neurotoxicity and metabolism of chlorpyrifos in a human neuronal model. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Suppressing flashes of items surrounding targets during calibration of a P300-based brain-computer interface improves performance

    NASA Astrophysics Data System (ADS)

    Frye, G. E.; Hauser, C. K.; Townsend, G.; Sellers, E. W.

    2011-04-01

    Since the introduction of the P300 brain-computer interface (BCI) speller by Farwell and Donchin in 1988, the speed and accuracy of the system have been significantly improved. Larger electrode montages and various signal processing techniques are responsible for most of the improvement in performance. New presentation paradigms have also led to improvements in bit rate and accuracy (e.g. Townsend et al (2010 Clin. Neurophysiol. 121 1109-20)). In particular, the checkerboard paradigm for online P300 BCI-based spelling performs well, has helped to document what makes for a successful paradigm, and is a good platform for further experimentation. The current paper further examines the checkerboard paradigm by suppressing flashes of items which surround the target during calibration (the suppression condition, SUP). In the online feedback mode the standard checkerboard paradigm (RCP) is used, with one stepwise linear discriminant classifier derived from the suppression condition and one derived from the standard checkerboard condition, counter-balanced. The results demonstrate that using suppression during calibration produces significantly more character selections/min (6.46, time between selections included) than the standard checkerboard condition (5.55), and that significantly fewer target flashes are needed per selection in the SUP condition (5.28) than in the RCP condition (6.17). Moreover, accuracy in the SUP and RCP conditions remained equivalent (~90%). Mean theoretical bit rate was 53.62 bits/min in the suppression condition and 46.36 bits/min in the standard checkerboard condition (ns). Waveform morphology also showed significant differences in amplitude and latency.

  4. Genome wide selection in Citrus breeding.

    PubMed

    Gois, I B; Borém, A; Cristofani-Yaly, M; de Resende, M D V; Azevedo, C F; Bastianel, M; Novelli, V M; Machado, M A

    2016-10-17

    Genome wide selection (GWS) is essential for the genetic improvement of perennial species such as Citrus because of its ability to increase gain per unit time and to enable the efficient selection of characteristics with low heritability. This study assessed GWS efficiency in a population of Citrus and compared it with selection based on phenotypic data. A total of 180 individual trees from a cross between Pera sweet orange (Citrus sinensis Osbeck) and Murcott tangor (Citrus sinensis Osbeck x Citrus reticulata Blanco) were evaluated for 10 characteristics related to fruit quality. The hybrids were genotyped using 5287 DArT-seq (diversity arrays technology) molecular markers, and their effects on phenotypes were predicted using the random regression best linear unbiased predictor (rr-BLUP) method. The predictive ability, prediction bias, and accuracy of GWS were estimated to verify its effectiveness for phenotype prediction, and the proportion of genetic variance explained by the markers was also computed. The heritability of the traits, as determined by the markers, was 16-28%. The predictive ability of the markers ranged from 0.53 to 0.64, and the regression coefficients between predicted and observed phenotypes were close to unity. Over 35% of the genetic variance was accounted for by the markers. Accuracy estimates with GWS were lower than those obtained by phenotypic analysis; however, GWS was superior in terms of genetic gain per unit time. Thus, GWS may be useful for Citrus breeding, as it can predict phenotypes early and accurately and reduce the length of the selection cycle. This study demonstrates the feasibility of genomic selection in Citrus.
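    The rr-BLUP prediction step is equivalent to ridge regression on the marker matrix, with the penalty playing the role of the residual-to-marker variance ratio. A toy sketch with simulated dominant (0/1) markers and a low-heritability phenotype; every value is a placeholder:

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(8)
      n, m = 180, 500                       # hybrids, markers (toy scale)
      M = rng.integers(0, 2, size=(n, m)).astype(float)   # dominant marker scores
      effects = rng.normal(0, 0.15, size=m) * (rng.random(m) < 0.1)
      y = M @ effects + rng.normal(0, 1.0, size=n)        # fruit-quality phenotype

      model = Ridge(alpha=500.0)            # alpha stands in for sigma_e^2/sigma_m^2
      gebv = cross_val_predict(model, M, y, cv=10)        # cross-validated GEBVs
      print(f"predictive ability r = {np.corrcoef(gebv, y)[0, 1]:.2f}")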

  5. RP-HPLC method development and validation for simultaneous estimation of atorvastatin calcium and pioglitazone hydrochloride in pharmaceutical dosage form.

    PubMed

    Peraman, Ramalingam; Mallikarjuna, Sasikala; Ammineni, Pravalika; Kondreddy, Vinod kumar

    2014-10-01

    A simple, selective, rapid, precise and economical reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for the simultaneous estimation of atorvastatin calcium (ATV) and pioglitazone hydrochloride (PIO) in pharmaceutical formulations. The method is carried out on a C8 (25 cm × 4.6 mm i.d., 5 μm) column with a mobile phase consisting of acetonitrile (ACN):water (pH adjusted to 6.2 using o-phosphoric acid) in the ratio of 45:55 (v/v). The retention times of ATV and PIO are 4.1 and 8.1 min, respectively, at a flow rate of 1 mL/min with diode array detection at 232 nm. Linear regression analysis of the calibration data showed a good linear relationship, with correlation coefficient (R2) values of 0.9998 and 0.9997 for ATV and PIO, respectively, over the concentration range of 10-80 µg/mL. The relative standard deviation for intra-day precision was found to be <2.0%. The method was validated according to the ICH guidelines in terms of specificity, selectivity, accuracy, precision, linearity, limit of detection, limit of quantitation and solution stability. The proposed method can be used for the simultaneous estimation of these drugs in marketed dosage forms. © The Author [2013]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extremely high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel, a material used very extensively in the manufacturing industries, of 10 mm thickness was taken as the workpiece. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, a clockwise cut direction was followed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed on the measured results to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the results give the optimal process-parameter settings for this dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.

  7. [Standard addition determination of impurities in Na2CrO4 by ICP-AES].

    PubMed

    Wang, Li-ping; Feng, Hai-tao; Dong, Ya-ping; Peng, Jiao-yu; Li, Wu; Shi, Hai-qin; Wang, Yong

    2015-02-01

    Inductively coupled plasma atomic emission spectrometry (ICP-AES) was used to determine the trace impurities Ca, Mg, Al, Fe and Si in industrial sodium chromate. Wavelengths of 167.079, 393.366, 259.940, 279.533 and 251.611 nm were selected as the analytical lines for the determination of Al, Ca, Fe, Mg and Si, respectively. Analytical errors were eliminated by adjusting the sample solution with high-purity hydrochloric acid, and the standard addition method was used to eliminate matrix effects. The linear correlation, detection limit, precision and recovery for the trace impurities of concern were examined, and the effect of the standard addition method on the accuracy of the determination at the selected analytical lines was studied in detail. The results show that the linear correlations of the standard curves were very good (R2 = 0.9988 to 0.9996) under the determined conditions. Detection limits of the trace impurities were in the range of 0.0134 to 0.0280 mg x L(-1). Sample recoveries were within 97.30% to 107.50%, and relative standard deviations were lower than 5.86% for eleven repeated determinations. The detection limits and accuracies established by the experiment meet the analytical requirements, and the procedure was successfully used to determine trace impurities in sodium chromate produced by the ion-membrane electrolysis technique. Because sodium chromate can be converted into sodium dichromate and chromic acid by adding acids, the established method can further be used to monitor trace impurities in these and other hexavalent chromium compounds.

  8. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    PubMed

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

    Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic-algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was somewhat lower one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
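    The regression stage can be sketched with support vector regression on PTT-derived features; the 14-feature matrix and the BP relationship below are synthetic placeholders, and the study's genetic-algorithm feature selection is omitted:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(9)
      n = 500
      ptt = rng.uniform(0.15, 0.35, size=n)       # pulse transit time, s (toy)
      hr = rng.uniform(55, 100, size=n)           # heart rate, bpm (toy)
      X = np.column_stack([ptt, hr, rng.normal(size=(n, 12))])  # 14 features total
      sbp = 150 - 200 * ptt + 0.2 * hr + rng.normal(0, 3, size=n)  # synthetic SBP

      X_tr, X_te, y_tr, y_te = train_test_split(X, sbp, random_state=0)
      model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0)).fit(X_tr, y_tr)
      err = model.predict(X_te) - y_te
      print(f"mean error = {err.mean():.2f} +/- {err.std():.2f} mmHg")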

  9. A Multi Directional Perfect Reconstruction Filter Bank Designed with 2-D Eigenfilter Approach: Application to Ultrasound Speckle Reduction.

    PubMed

    Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S

    2017-02-01

    B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation; therefore, reduction of speckle noise is an essential task for improving the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The method is used for the design of two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks, in which fan-shaped, diamond-shaped and checkerboard-shaped filters are designed. A quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is then expressed as a set of linear constraints on the corresponding synthesis low-pass filter, which is designed using the eigenfilter method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for the reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, ensuring improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.

  10. Design and manufacture of customized dental implants by using reverse engineering and selective laser melting technology.

    PubMed

    Chen, Jianyu; Zhang, Zhiguang; Chen, Xianshuai; Zhang, Chunyu; Zhang, Gong; Xu, Zhewu

    2014-11-01

    Recently a new therapeutic concept of patient-specific implant dentistry has been advanced based on computer-aided design/computer-aided manufacturing technology. However, a comprehensive study of the design and 3-dimensional (3D) printing of customized implants, their mechanical properties, and their biomechanical behavior is lacking. The purpose of this study was to evaluate the mechanical and biomechanical performance of a novel custom-made dental implant fabricated by the selective laser melting technique, using simulation and in vitro experimental studies. Two types of customized implants were designed by using reverse engineering: a root-analog implant and a root-analog threaded implant. The titanium implants were printed layer by layer with the selective laser melting technique. The relative density, surface roughness, tensile properties, bend strength, and dimensional accuracy of the specimens were evaluated. Nonlinear and linear finite element analysis and experimental studies were used to investigate the stress distribution, micromotion, and primary stability of the implants. Selective laser melting 3D printing technology was able to reproduce the customized implant designs and produce implants with high density and strength and adequate dimensional accuracy. Better stress distribution and lower maximum micromotions were observed for the root-analog threaded implant model than for the root-analog implant model. In the experimental tests, the implant stability quotient and pull-out strength of the 2 types of implants indicated that better primary stability can be obtained with a root-analog threaded implant design. Selective laser melting proved to be an efficient means of printing fully dense customized implants with high strength and sufficient dimensional accuracy. Adding the threaded characteristic to the customized root-analog threaded implant design maintained the approximate geometry of the natural root and exhibited better stress distribution and primary stability. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  11. Improved water-level forecasting for the Northwest European Shelf and North Sea through direct modelling of tide, surge and non-linear interaction

    NASA Astrophysics Data System (ADS)

    Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman

    2013-07-01

    In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g. because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, the straightforward direct model forecasting of total water levels is better. The methodology applied to improve both tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.

  12. Rapid screening of drugs of abuse in human urine by high-performance liquid chromatography coupled with high resolution and high mass accuracy hybrid linear ion trap-Orbitrap mass spectrometry.

    PubMed

    Li, Xiaowen; Shen, Baohua; Jiang, Zheng; Huang, Yi; Zhuo, Xianyi

    2013-08-09

    A novel analytical toxicology method has been developed for the analysis of drugs of abuse in human urine using a high-resolution, high-mass-accuracy hybrid linear ion trap-Orbitrap mass spectrometer (LTQ-Orbitrap-MS). The method allows for the detection of different drugs of abuse, including amphetamines, cocaine, opiate alkaloids, cannabinoids, hallucinogens and their metabolites. After solid-phase extraction with Oasis HLB cartridges, spiked urine samples were analysed by HPLC/LTQ-Orbitrap-MS using an electrospray interface in positive ionisation mode, with a resolving power of 30,000 full width at half maximum (FWHM). Gradient elution on a Hypersil Gold PFP column (50 mm × 2.1 mm) resolved the 65 target compounds and 3 internal standards in a total chromatographic run time of 20 min. Validation of this method consisted of confirmation of identity, selectivity, linearity, limit of detection (LOD), lowest limit of quantification (LLOQ), accuracy, precision, extraction recovery and matrix effect. The regression coefficients (r2) for the calibration curves (LLOQ to 100 ng/mL) were ≥0.99. The LODs for the 65 validated compounds were better than 5 ng/mL except for 4 compounds. The relative standard deviation (RSD), used to estimate repeatability at three concentrations, was always less than 15%. The extraction recoveries and matrix effects were above 50% and 70%, respectively. Mass accuracy was always better than 2 ppm, corresponding to a maximum mass error of 0.8 millimass units (mmu). The accurate masses of characteristic fragments were obtained by collisional experiments for more reliable identification of the analytes. Automated data analysis and reporting were performed using ToxID software with an exact-mass database. The procedure was then successfully applied to analyse drugs of abuse in a real urine sample from a subject assumed to be a drug user. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  13. Attractor reconstruction for non-linear systems: a methodological note

    USGS Publications Warehouse

    Nichols, J.M.; Nichols, J.D.

    2001-01-01

    Attractor reconstruction is an important step in the process of making predictions for non-linear time-series and in the computation of certain invariant quantities used to characterize the dynamics of such series. The utility of computed predictions and invariant quantities is dependent on the accuracy of attractor reconstruction, which in turn is determined by the methods used in the reconstruction process. This paper suggests methods by which the delay and embedding dimension may be selected for a typical delay coordinate reconstruction. A comparison is drawn between the use of the autocorrelation function and mutual information in quantifying the delay. In addition, a false nearest neighbor (FNN) approach is used in minimizing the number of delay vectors needed. Results highlight the need for an accurate reconstruction in the computation of the Lyapunov spectrum and in prediction algorithms.
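    A minimal delay-coordinate reconstruction, using the autocorrelation criterion discussed above to pick the delay (mutual information and a false-nearest-neighbor search would replace these heuristics in a fuller treatment); the time series is a toy signal:

      import numpy as np

      def autocorr_delay(x):
          # pick the lag where the autocorrelation first drops below 1/e
          x = x - x.mean()
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]
          ac = ac / ac[0]
          return int(np.argmax(ac < 1 / np.e))

      def delay_embed(x, dim, tau):
          # stack delayed copies x(t), x(t+tau), ..., x(t+(dim-1)*tau)
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      t = np.linspace(0, 60, 3000)
      x = np.sin(t) + 0.5 * np.sin(2.2 * t)        # toy quasi-periodic series
      tau = autocorr_delay(x)
      Y = delay_embed(x, dim=3, tau=tau)
      print("delay:", tau, "reconstructed state vectors:", Y.shape)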

  14. Determination of some phenolic compounds in red wine by RP-HPLC: method development and validation.

    PubMed

    Burin, Vívian Maria; Arcari, Stefany Grützmann; Costa, Léa Luzia Freitas; Bordignon-Luiz, Marilde T

    2011-09-01

    A methodology employing reversed-phase high-performance liquid chromatography (RP-HPLC) was developed and validated for the simultaneous determination of five phenolic compounds in red wine. The chromatographic separation was carried out on a C18 column with water acidified with acetic acid (pH 2.6) (solvent A) and a mixture of 20% solvent A and 80% acetonitrile (solvent B) as the mobile phase. The validation parameters included selectivity, linearity, range, limits of detection and quantitation, precision and accuracy, using an internal standard. All calibration curves were linear (R2 > 0.999) within the range, and good precision (RSD < 2.6%) and recovery (80-120%) were obtained for all compounds. The method was applied to quantify phenolics in red wine samples from Santa Catarina State, Brazil, and good separation of the phenolic compound peaks in these wines was observed.
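    The calibration underlying such a validation reduces to a linear fit of the analyte-to-internal-standard peak-area ratio against concentration; the concentrations and area ratios below are invented for illustration:

      import numpy as np

      conc = np.array([5.0, 10, 25, 50, 100, 200])            # standards, mg/L
      ratio = np.array([0.11, 0.21, 0.54, 1.05, 2.11, 4.20])  # area(analyte)/area(IS)

      slope, intercept = np.polyfit(conc, ratio, 1)
      pred = slope * conc + intercept
      r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
      print(f"y = {slope:.4f}x + {intercept:.4f}, R^2 = {r2:.4f}")

      # quantify an unknown sample from its measured ratio
      unknown_ratio = 1.60
      print(f"c = {(unknown_ratio - intercept) / slope:.1f} mg/L")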

  15. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters that give an indication of bias as well as of total squared error must be used to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem that the multiple regression equation obtained can be extremely sensitive to error is also discussed.

  16. Graphite nanocomposites sensor for multiplex detection of antioxidants in food.

    PubMed

    Ng, Khan Loon; Tan, Guan Huat; Khor, Sook Mei

    2017-12-15

    Butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tert-butylhydroquinone (TBHQ) are synthetic antioxidants used in the food industry. Herein, we describe the development of a novel graphite nanocomposite-based electrochemical sensor for the multiplex detection and measurement of BHA, BHT, and TBHQ levels in complex food samples using a linear sweep voltammetry (LSV) technique. The newly established analytical method exhibited good sensitivity, limit of detection, limit of quantitation, and selectivity. The accuracy and reliability of the analytical results were tested by method validation and by comparison with the results of a liquid chromatography method, where a linear correlation of more than 0.99 was achieved. The addition of sodium dodecyl sulfate as a supporting additive further enhanced the LSV response (anodic peak current, Ipa) of BHA and BHT by 2- and 20-fold, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to provide the results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple to implement. Incorporation of metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing the full potential of mean force.
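
    A minimal numerical sketch of the positioning step, assuming the usual linear response approximation (LRA) form dG = 1/2 (<U_fine - U_coarse>_coarse + <U_fine - U_coarse>_fine); the energy-gap samples below are simulated stand-ins for evaluations of both potentials on frames drawn from each ensemble.

```python
# Positioning a fine-physics segment relative to the coarse reference with the
# linear response approximation (LRA): a minimal sketch on simulated energy gaps.
import numpy as np

rng = np.random.default_rng(1)
# U_fine - U_coarse evaluated on frames sampled with the coarse potential...
dU_on_coarse = rng.normal(12.0, 2.0, size=500)   # kcal/mol, hypothetical
# ...and on frames sampled with the fine potential.
dU_on_fine = rng.normal(10.0, 2.0, size=500)

# LRA estimate of the coarse -> fine free-energy shift:
# dG = 0.5 * (<dU>_coarse + <dU>_fine)
dG_lra = 0.5 * (dU_on_coarse.mean() + dU_on_fine.mean())
print(f"LRA free-energy shift: {dG_lra:.2f} kcal/mol")
```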

  18. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model

    PubMed Central

    2015-01-01

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to provide the results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple to implement. Incorporation of metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing the full potential of mean force. PMID:25136268

  19. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm uses a stochastic indexing technique to select a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can therefore be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. It outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
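
    One plausible reading of the stochastic indexing idea is sketched below: the kernel expansion is evaluated over a uniformly sampled subset of the stored support vectors and rescaled so the truncated sum remains an unbiased estimate of the full one. The RBF kernel, subset size, and sampling rule are illustrative assumptions, not the paper's algorithm.

```python
# Evaluating a kernel expansion over a stochastically indexed subset of the
# stored support vectors: a minimal sketch; kernel, subset size, and uniform
# sampling are assumptions for illustration.
import numpy as np

def rbf(X, x, gamma=0.5):
    """RBF kernel values k(X_i, x) for all rows of X."""
    return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

def predict_subset(sv, alpha, b, x, subset_size, rng):
    """f(x) = sum_i alpha_i k(sv_i, x) + b, over a random subset of SVs."""
    idx = rng.choice(len(sv), size=min(subset_size, len(sv)), replace=False)
    # Rescale so the truncated sum is an unbiased estimate of the full sum.
    scale = len(sv) / len(idx)
    return scale * float(alpha[idx] @ rbf(sv[idx], x)) + b

rng = np.random.default_rng(0)
sv = rng.standard_normal((200, 3))        # stored support vectors
alpha = 0.1 * rng.standard_normal(200)    # expansion coefficients
x = np.zeros(3)
f_full = float(alpha @ rbf(sv, x))
f_sub = predict_subset(sv, alpha, 0.0, x, subset_size=50, rng=rng)
print(f"full expansion: {f_full:.3f}, stochastic subset: {f_sub:.3f}")
```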

  20. Heritability and phenotypic variation of canine hip dysplasia radiographic traits in a cohort of Australian German shepherd dogs.

    PubMed

    Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Tammen, Imke; Raadsma, Herman W; Castle, Kao; Thomson, Peter C

    2012-01-01

    Canine Hip Dysplasia (CHD) is a common, painful and debilitating orthopaedic disorder of dogs with a partly genetic, multifactorial aetiology. Worldwide, potential breeding dogs are evaluated for CHD using radiographically based screening schemes such as the nine ordinally-scored British Veterinary Association Hip Traits (BVAHTs). The effectiveness of selective breeding based on screening results requires that a significant proportion of the phenotypic variation is caused by the presence of favourable alleles segregating in the population. This proportion, heritability, was measured in a cohort of 13,124 Australian German Shepherd Dogs born between 1976 and 2005, displaying phenotypic variation for BVAHTs, using ordinal, linear and binary mixed models fitted by a Restricted Maximum Likelihood method. Heritability estimates for the nine BVAHTs ranged from 0.14-0.24 (ordinal models), 0.14-0.25 (linear models) and 0.12-0.40 (binary models). Heritability for the summed BVAHT phenotype was 0.30 ± 0.02. The presence of heritable variation demonstrates that selection based on BVAHTs has the potential to improve BVAHT scores in the population. Assuming a genetic correlation between BVAHT scores and CHD-related pain and dysfunction, the welfare of Australian German Shepherds can be improved by continuing to consider BVAHT scores in the selection of breeding dogs. However, as heritability values are only moderate in magnitude, the accuracy and effectiveness of selection could be improved by the use of Estimated Breeding Values in preference to solely phenotype-based selection of breeding animals.

  1. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    PubMed

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
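
    The comparison described above can be reproduced in miniature: fit a single-pole Cole-Cole model to synthetic noisy permittivity data sampled on a linear and on a logarithmic frequency grid, then compare the fit errors. The model order, parameter values, and noise level are illustrative assumptions, not the paper's tissue measurements.

```python
# Fitting a single-pole Cole-Cole model on linear and logarithmic frequency
# grids: a minimal sketch with synthetic data.
import numpy as np
from scipy.optimize import least_squares

def cole_cole(f, eps_inf, d_eps, tau, alpha):
    w = 2 * np.pi * f
    return eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))

def residuals(p, f, meas):
    diff = cole_cole(f, *p) - meas
    return np.concatenate([diff.real, diff.imag])

true = (4.0, 36.0, 7e-12, 0.1)            # eps_inf, delta_eps, tau (s), alpha
rng = np.random.default_rng(0)
for name, f in [("linear", np.linspace(1e8, 2e10, 101)),
                ("log", np.logspace(8, np.log10(2e10), 101))]:
    meas = cole_cole(f, *true)
    meas = meas + 0.2 * (rng.standard_normal(f.size)
                         + 1j * rng.standard_normal(f.size))
    fit = least_squares(residuals, x0=(3.0, 30.0, 1e-11, 0.05),
                        args=(f, meas),
                        bounds=([1, 1, 1e-13, 0], [10, 100, 1e-9, 0.5]))
    err = np.abs(cole_cole(f, *fit.x) - cole_cole(f, *true))
    print(f"{name:6s} grid: max |fit - true| = {err.max():.3f}")
```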

  2. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data

    PubMed Central

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M.; O’Halloran, Martin

    2016-01-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues. PMID:28191324

  3. A hybrid algorithm for selecting head-related transfer function based on similarity of anthropometric structures

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang-Yang; Wang, Shu-Guang; Gao, Li-Ping

    2010-09-01

    As the basic data for virtual auditory technology, the head-related transfer function (HRTF) has many applications in the areas of room acoustic modeling, spatial hearing, and multimedia. How to individualize HRTFs quickly and effectively remains an open problem. Based on the similarity and relativity of anthropometric structures, a hybrid HRTF customization algorithm, which combines principal component analysis (PCA), multiple linear regression (MLR), and database matching (DM), is presented in this paper. The HRTFs selected by both the best match and the worst match were used to obtain binaurally auralized sounds, which were then used in subjective listening experiments, and the results were compared. For directions in the horizontal plane, the localization results show that the selection of HRTFs can enhance localization accuracy and can also abate the problem of front-back confusion.

  4. Accuracy of predicting genomic breeding values for residual feed intake in Angus and Charolais beef cattle.

    PubMed

    Chen, L; Schenkel, F; Vinsky, M; Crews, D H; Li, C

    2013-10-01

    In beef cattle, phenotypic data that are difficult and/or costly to measure, such as feed efficiency, and DNA marker genotypes are usually available on a small number of animals of different breeds or populations. To achieve maximal accuracy of genomic prediction using the phenotype and genotype data, strategies for forming a training population to predict genomic breeding values (GEBV) of the selection candidates need to be evaluated. In this study, we examined the accuracy of predicting GEBV for residual feed intake (RFI) based on 522 Angus and 395 Charolais steers genotyped with the Illumina BovineSNP50 BeadChip for 3 training-population-forming strategies: within breed, across breed, and pooling data from the 2 breeds (i.e., combined). Two other scenarios, with the training and validation data split by birth year and by sire family within a breed, were also investigated to assess the impact of genetic relationships on the accuracy of genomic prediction. Three statistical methods, including best linear unbiased prediction with the relationship matrix defined based on the pedigree (PBLUP), based on the SNP genotypes (GBLUP), and a Bayesian method (BayesB), were used to predict the GEBV. The results showed that the accuracy of GEBV prediction was highest when the prediction was within breed and when the validation population had greater genetic relationships with the training population, with a maximum of 0.58 for Angus and 0.64 for Charolais. The within-breed prediction accuracies dropped to 0.29 and 0.38, respectively, when the validation populations had a minimal pedigree link with the training population. When the training population of a different breed was used to predict the GEBV of the validation population, that is, across-breed genomic prediction, the accuracies were further reduced to 0.10 to 0.22, depending on the prediction method used. Pooling data from the 2 breeds to form the training population resulted in accuracies increased to 0.31 and 0.43, respectively, for the Angus and Charolais validation populations. The results suggest that the genetic relationship of selection candidates with the training population has a major impact on the accuracy of GEBV obtained using the Illumina BovineSNP50 BeadChip, and that pooling data from different breeds to form the training population will improve the accuracy of across-breed genomic prediction for RFI in beef cattle.
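
    A minimal sketch of the GBLUP step used in such studies: build a VanRaden genomic relationship matrix from centred allele counts, solve the ridge-type system on the training animals, and project GEBV to validation animals. Genotypes, phenotypes, and the variance ratio lambda are simulated stand-ins (lambda would normally come from REML variance components).

```python
# GBLUP with a VanRaden genomic relationship matrix: a minimal sketch on
# simulated genotypes and phenotypes.
import numpy as np

rng = np.random.default_rng(0)
n, m = 300, 1000                                         # animals, SNPs
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)   # 0/1/2 allele counts
p = geno.mean(axis=0) / 2
Z = geno - 2 * p                                         # centred genotypes
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))                  # VanRaden GRM

beta = rng.normal(0, 0.05, size=m)                       # simulated SNP effects
y = geno @ beta + rng.standard_normal(n)                 # phenotype, e.g. RFI

train, valid = np.arange(250), np.arange(250, n)
lam = 1.0                                                # var_e / var_g, assumed
g_tt = G[np.ix_(train, train)]
rhs = y[train] - y[train].mean()
u = np.linalg.solve(g_tt + lam * np.eye(len(train)), rhs)
gebv_valid = G[np.ix_(valid, train)] @ u                 # project to validation
acc = np.corrcoef(gebv_valid, y[valid])[0, 1]
print(f"validation accuracy (corr with phenotype): {acc:.2f}")
```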

  5. A novel approach for prediction of tacrolimus blood concentration in liver transplantation patients in the intensive care unit through support vector regression.

    PubMed

    Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan

    2007-01-01

    Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during the intensive care unit (ICU) stay. This is the first ICU study in which support vector machines, a relatively new data-modeling technique, are investigated and tested for their ability to predict tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, finally yielding a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. Prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/mL (standard deviation [SD] 2.47) and 2.38 ng/mL (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/mL (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). RBF SVR had the advantage of requiring only 2 input variables to perform this prediction, in comparison to the 15 and 16 variables needed by linear SVR and MLR, respectively, an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior to that of the MLR model.
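
    The model comparison described above maps naturally onto standard library components; the sketch below cross-validates linear SVR, RBF SVR, and multiple linear regression on simulated stand-ins for the ICU variables, scoring by mean absolute error.

```python
# Cross-validated comparison of MLR, linear SVR, and RBF SVR: a minimal sketch
# on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((457, 16))        # 16 candidate input variables
y = 3 + X[:, 0] - 0.5 * X[:, 1] + np.sin(X[:, 2]) \
    + 0.5 * rng.standard_normal(457)

models = {
    "MLR": LinearRegression(),
    "linear SVR": make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0)),
    "RBF SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:10s} 5-fold MAE: {mae:.2f}")
```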

  6. An isotope dilution ultra high performance liquid chromatography-tandem mass spectrometry method for the simultaneous determination of sugars and humectants in tobacco products.

    PubMed

    Wang, Liqun; Cardenas, Roberto Bravo; Watson, Clifford

    2017-09-08

    CDC's Division of Laboratory Sciences developed and validated a new method for the simultaneous detection and measurement of 11 sugars, alditols and humectants in tobacco products. The method uses isotope dilution ultra high performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS) and has demonstrated high sensitivity, selectivity, throughput and accuracy, with recoveries ranging from 90% to 113%, limits of detection ranging from 0.0002 to 0.0045 μg/mL and coefficients of variation (CV%) ranging from 1.4 to 14%. Calibration curves for all analytes were linear, with R² values greater than 0.995. Quantification of tobacco components is necessary to characterize tobacco product components and their potential effects on consumer appeal, smoke chemistry and toxicology, and to potentially help distinguish tobacco product categories. The researchers analyzed a variety of tobacco products (e.g., cigarettes, little cigars, cigarillos) using the new method and documented differences in the abundance of selected analytes among product categories. Specifically, differences were detected in levels of selected sugars found in little cigars and cigarettes, which could help address appeal potential and have utility when the product category is unknown, unclear, or miscategorized.

  7. Weighted linear least squares estimation of diffusion MRI parameters: strengths, limitations, and pitfalls.

    PubMed

    Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben

    2013-11-01

    Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods.
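
    The two weighting strategies can be contrasted on a mono-exponential diffusion signal, ln S = ln S0 - b * ADC. The sketch below weights the log-linear fit by the squared noisy signals (strategy a) and, alternatively, iterates with weights from the squared predicted signals (strategy b); the acquisition parameters and noise level are illustrative assumptions.

```python
# WLLS for a mono-exponential diffusion signal ln S = ln S0 - b * ADC:
# a minimal sketch contrasting the two weighting strategies.
import numpy as np

rng = np.random.default_rng(0)
b = np.linspace(0, 3000, 30)                    # b-values, s/mm^2
S0, adc = 1000.0, 0.8e-3
signal = S0 * np.exp(-b * adc) + 5.0 * rng.standard_normal(b.size)
signal = np.clip(signal, 1e-3, None)            # keep the log defined

A = np.column_stack([np.ones_like(b), -b])      # design for [ln S0, ADC]
logs = np.log(signal)

def wlls(weights):
    W = np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ logs)

x_a = wlls(signal ** 2)                         # (a) squared noisy signals
x = np.linalg.lstsq(A, logs, rcond=None)[0]     # (b) OLS start, then iterate
for _ in range(2):
    x = wlls(np.exp(A @ x) ** 2)                #     squared predicted signals
print(f"true ADC {adc:.2e}; (a) {x_a[1]:.2e}; (b) {x[1]:.2e}")
```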

  8. New machine-learning algorithms for prediction of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Mandal, Indrajit; Sairam, N.

    2014-03-01

    This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), aiming to prevent delayed diagnosis and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods applied to the diagnosis of PD include sparse multinomial logistic regression, rotation forest ensembles with support vector machines and principal component analysis, artificial neural networks, and boosting methods. A new ensemble method comprising the Bayesian network optimised by a Tabu search algorithm as classifier and Haar wavelets as projection filter is used for relevant feature selection and ranking. The highest accuracy, obtained by linear logistic regression and sparse multinomial logistic regression, is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All experiments are conducted at 95% and 99% confidence levels, and the results are established with corrected t-tests. This work shows a high degree of advancement in software reliability and quality of the computer-aided diagnosis system and experimentally shows the best results with supportive statistical inference.

  9. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Items such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
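
    The diagnostic itself is a one-line computation once the weight matrix is formed. The sketch below builds a toy sparse weight matrix standing in for ray-pixel intersection lengths, counts the singular values above a relative threshold, and reports the condition of the retained system; the geometry and threshold are illustrative assumptions.

```python
# Counting significant singular values of a tomographic weight matrix:
# a minimal sketch.
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_pixels = 120, 256             # e.g. 4 views x 30 rays, 16x16 domain
W = rng.random((n_rays, n_pixels)) * (rng.random((n_rays, n_pixels)) < 0.1)

s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
threshold = 1e-3 * s[0]
n_sig = int(np.sum(s > threshold))
print(f"significant singular values: {n_sig} of {min(W.shape)}")
print(f"condition of retained system: {s[0] / s[n_sig - 1]:.1e}")
```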

  10. Prognostic scores in oesophageal or gastric variceal bleeding.

    PubMed

    Ohmann, C; Stöltzing, H; Wins, L; Busch, E; Thon, K

    1990-05-01

    Numerous scoring systems have been developed for the prediction of outcome of variceal bleeding; however, only a few have been evaluated adequately. The object of this study was to improve the classical Child-Pugh score (CPS) and to test other scores from the literature. Patients (n = 82) with endoscopically confirmed variceal bleeding and long-term sclerotherapy were included in the study. Linear logistic regression (LR) was applied to different sets of prognostic variables with regard to 30-day mortality. In addition, scores from the literature were evaluated on the data set. Performance was measured by the accuracy and receiver-operating characteristic curves. The application of LR to all five CPS variables (accuracy, 80%) was superior to the classical CPS (70%). LR with selection from the CPS variables or from other sets of variables resulted in no improvement. Compared with CPS only three scores from the literature, mainly based on subsets of the CPS variables, showed an improved accuracy. It is concluded that CPS is still a good scoring system; however, it can be improved by statistical analysis using the same variables.
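
    A minimal sketch of the modelling step described above: logistic regression on five stand-in Child-Pugh variables against a simulated 30-day mortality outcome, scored by cross-validated accuracy and ROC AUC. The data are invented; only the analysis pattern mirrors the study.

```python
# Logistic regression on five stand-in Child-Pugh variables vs. simulated
# 30-day mortality: a minimal sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 82
X = rng.standard_normal((n, 5))   # bilirubin, albumin, PT, ascites, enceph.
logit = 0.8 * X[:, 0] - 0.9 * X[:, 1] + 0.6 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000)
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated accuracy {acc:.2f}, ROC AUC {auc:.2f}")
```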

  11. Genomic Selection in Multi-environment Crop Trials.

    PubMed

    Oakey, Helena; Cullis, Brian; Thompson, Robin; Comadran, Jordi; Halpin, Claire; Waugh, Robbie

    2016-05-03

    Genomic selection in crop breeding introduces modeling challenges not found in animal studies. These include the need to accommodate replicate plants for each line, consider spatial variation in field trials, address line-by-environment interactions, and capture nonadditive effects. Here, we propose a flexible single-stage genomic selection approach that resolves these issues. Our linear mixed model incorporates spatial variation through environment-specific terms, as well as randomization-based design terms. It considers marker and marker-by-environment interactions, using ridge-regression best linear unbiased prediction to extend genomic selection to multiple environments. Since the approach uses the raw data from line replicates, the line genetic variation is partitioned into marker and nonmarker residual genetic variation (i.e., additive and nonadditive effects). This results in a more precise estimate of marker genetic effects. Using barley height data from trials of up to 477 cultivars in 2 different years, we demonstrate that our new genomic selection model improves predictions compared to current models. Analyzing single trials revealed improvements in predictive ability of up to 5.7%. For the multiple environment trial (MET) model, combining both year trials improved predictive ability by up to 11.4% compared to a single-environment analysis. Benefits were significant even when fewer markers were used. Compared to a single-year standard model run with 3490 markers, our partitioned MET model achieved the same predictive ability using between 500 and 1000 markers, depending on the trial. Our approach can be used to increase accuracy and confidence in the selection of the best lines for breeding and/or to reduce costs by using fewer markers.

  12. Space Mirror Alignment System

    NASA Technical Reports Server (NTRS)

    Jau, Bruno M.; McKinney, Colin; Smythe, Robert F.; Palmer, Dean L.

    2011-01-01

    An optical alignment mirror mechanism (AMM) has been developed with an angular positioning accuracy of +/-0.2 arcsec. This requires the mirror's linear positioning actuators to have positioning resolutions of +/-112 nm to enable the mirror to meet the angular tip/tilt accuracy requirement. Demonstrated capabilities are 0.1 arcsec angular mirror positioning accuracy, which translates into linear positioning resolutions at the actuator of 50 nm. The mechanism consists of a structure with sets of cross-directional flexures that enable the mirror's tip and tilt motion, a mirror with its kinematic mount, and two linear actuators. An actuator comprises a brushless DC motor, a linear ball screw, and a piezoelectric brake that holds the mirror's position while the unit is unpowered. An interferometric linear position sensor senses the actuator's position. The AMMs were developed for an Astrometric Beam Combiner (ABC) optical bench, which is part of an interferometer development. Custom electronics were also developed to accommodate the presence of multiple AMMs within the ABC and provide a compact, all-in-one solution to power and control the AMMs.

  13. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.

  14. Discrimination of serum Raman spectroscopy between normal and colorectal cancer

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhou; Yang, Tianyue; Yu, Ting; Li, Siqi

    2011-07-01

    Raman spectroscopy of tissues has been widely studied for the diagnosis of various cancers, but biofluids have seldom been used as the analyte because of their low analyte concentrations. Herein, Raman spectra were measured and analyzed for serum from 30 normal subjects, 46 colon cancer patients, and 44 rectal cancer patients. The information from the Raman peaks (intensity and width) and from the fluorescence background (baseline function coefficients) was selected as parameters for statistical analysis. Principal component regression (PCR) and partial least squares regression (PLSR) were applied separately to the selected parameters to compare their performance; PCR performed better than PLSR on our spectral data. Linear discriminant analysis (LDA) was then applied to the principal components (PCs) from the two regression methods, and diagnostic accuracies of 88% and 83% were obtained. The conclusion is that the selected features retain the information of the original spectra well and that Raman spectroscopy of serum has potential for the diagnosis of colorectal cancer.
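
    The classification chain described above (dimension reduction followed by a linear discriminant) is sketched below on simulated spectral features standing in for the peak and baseline parameters; the feature dimensions and class separations are illustrative assumptions.

```python
# PCA followed by linear discriminant analysis on simulated spectral features:
# a minimal sketch of the classification chain.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_per_class = [30, 46, 44]                   # normal, colon cancer, rectal cancer
X_parts, y = [], []
for label, n in enumerate(n_per_class):
    centre = 0.8 * rng.standard_normal(20)   # class-specific feature shift
    X_parts.append(centre + rng.standard_normal((n, 20)))
    y += [label] * n
X, y = np.vstack(X_parts), np.array(y)

model = make_pipeline(StandardScaler(), PCA(n_components=8),
                      LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated diagnostic accuracy: {acc:.2f}")
```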

  15. Accuracy enhancement of point triangulation probes for linear displacement measurement

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, several factors must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named the EASDF (expanded average square difference function), for a newly designed PTB composed of an incoherent source (LED), a line-scan array detector, a specially selected diffusely reflecting surface, and several optical components. The EASDF, which is a modified correlation function, is able to calculate the displacement between the probe and the object surface effectively even in the presence of inclinations, power fluctuations, and noise.

  16. Genomic Prediction of Testcross Performance in Canola (Brassica napus)

    PubMed Central

    Jan, Habib U.; Abbadi, Amine; Lücke, Sophie; Nichols, Richard A.; Snowdon, Rod J.

    2016-01-01

    Genomic selection (GS) is a modern breeding approach where genome-wide single-nucleotide polymorphism (SNP) marker profiles are simultaneously used to estimate performance of untested genotypes. In this study, the potential of genomic selection methods to predict testcross performance for hybrid canola breeding was assessed for various agronomic traits based on genome-wide marker profiles. A total of 475 genetically diverse spring-type canola pollinator lines were genotyped at 24,403 single-copy, genome-wide SNP loci. In parallel, the 950 F1 testcross combinations between the pollinators and two representative testers were evaluated for a number of important agronomic traits including seedling emergence, days to flowering, lodging, oil yield and seed yield along with essential seed quality characters including seed oil content and seed glucosinolate content. A ridge-regression best linear unbiased prediction (RR-BLUP) model was applied in combination with 500 cross-validations for each trait to predict testcross performance, both across the whole population and within individual subpopulations or clusters, based solely on SNP profiles. Subpopulations were determined using multidimensional scaling and K-means clustering. Genomic prediction accuracy across the whole population was highest for seed oil content (0.81) followed by oil yield (0.75) and lowest for seedling emergence (0.29). For seed yield, seed glucosinolate, lodging resistance and days to onset of flowering (DTF), prediction accuracies were 0.45, 0.61, 0.39 and 0.56, respectively. Prediction accuracies could be increased for some traits by treating subpopulations separately; a strategy which only led to moderate improvements for some traits with low heritability, like seedling emergence. No useful or consistent increase in accuracy was obtained by inclusion of a population substructure covariate in the model. Testcross performance prediction using genome-wide SNP markers shows considerable potential for pre-selection of promising hybrid combinations prior to resource-intensive field testing over multiple locations and years. PMID:26824924
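
    A minimal sketch of RR-BLUP-style prediction of testcross performance, assuming ridge regression on genome-wide SNP profiles with cross-validation and predictive ability reported as the correlation between predicted and observed values; genotypes, effects, and phenotypes are simulated.

```python
# RR-BLUP-style genomic prediction via heavily shrunk ridge regression with
# cross-validation: a minimal sketch on simulated data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_lines, n_snps = 475, 2000
geno = rng.binomial(2, 0.3, size=(n_lines, n_snps)).astype(float)
effects = rng.normal(0, 0.03, size=n_snps)
pheno = geno @ effects + rng.standard_normal(n_lines)   # e.g. oil content

abilities = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(geno):
    model = Ridge(alpha=0.5 * n_snps)   # strong shrinkage, RR-BLUP style
    model.fit(geno[train], pheno[train])
    pred = model.predict(geno[test])
    abilities.append(np.corrcoef(pred, pheno[test])[0, 1])
print(f"mean predictive ability: {np.mean(abilities):.2f}")
```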

  17. Genomic and pedigree-based prediction for leaf, stem, and stripe rust resistance in wheat.

    PubMed

    Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Huerta-Espino, Julio; Lan, Caixia; Bhavani, Sridhar; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E

    2017-07-01

    Genomic prediction for seedling and adult plant resistance to wheat rusts was compared to prediction using a few markers as fixed effects in a least-squares approach and to pedigree-based prediction. The unceasing plant-pathogen arms race and the ephemeral nature of some rust resistance genes have been challenging for wheat (Triticum aestivum L.) breeding programs and farmers. Hence, it is important to devise strategies for effective evaluation and exploitation of quantitative rust resistance. One promising approach that could accelerate gain from selection for rust resistance is 'genomic selection', which utilizes dense genome-wide markers to estimate the breeding values (BVs) for quantitative traits. Our objective was to compare genomic prediction models, namely genomic best linear unbiased prediction (GBLUP), GBLUP A (GBLUP with selected loci as fixed effects), reproducing kernel Hilbert spaces with markers (RKHS-M), RKHS with pedigree (RKHS-P), and RKHS with markers and pedigree (RKHS-MP), against a least-squares (LS) approach in determining the BVs for seedling and/or adult plant resistance (APR) to leaf rust (LR), stem rust (SR), and stripe rust (YR). The 333 lines in the 45th IBWSN and the 313 lines in the 46th IBWSN were genotyped using genotyping-by-sequencing and phenotyped in replicated trials. The mean prediction accuracies ranged from 0.31-0.74 for LR seedling, 0.12-0.56 for LR APR, 0.31-0.65 for SR APR, 0.70-0.78 for YR seedling, and 0.34-0.71 for YR APR. For most datasets, the RKHS-MP model gave the highest accuracies, while LS gave the lowest. The GBLUP, GBLUP A, RKHS-M, and RKHS-P models gave similar accuracies. Using genome-wide marker-based models resulted in an average 42% increase in accuracy over LS. We conclude that GS is a promising approach for improvement of quantitative rust resistance and can be implemented in the breeding pipeline.

  18. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets relies either on pixel-wise processing of the full spectral information, on appropriate selection of individual bands, or on calculation of spectral indices. Limitations of such approaches are reduced classification accuracy and reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, as well as a loss of information intrinsic to band selection and the use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in the classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach especially improved the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.

  19. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them, the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, the proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples.
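
    For two diagnostic tests, the optimal linear combination can be found by a direct search over directions, which also makes the re-substitution optimism easy to demonstrate. The sketch below maximizes the empirical AUC over an angle grid and compares the re-substitution AUC with a cross-validated estimate; the data and the grid search are illustrative, not the article's nonparametric procedure.

```python
# Optimal linear combination of two diagnostic tests by direct AUC search,
# with cross-validation to expose re-substitution optimism: a minimal sketch.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, 2)) + np.outer(y, [0.8, 0.5])   # two test results

def best_angle(Xtr, ytr, grid=np.linspace(0, 2 * np.pi, 361)):
    """Angle whose direction (cos t, sin t) maximizes the empirical AUC."""
    scores = [roc_auc_score(ytr, Xtr @ [np.cos(t), np.sin(t)]) for t in grid]
    return grid[int(np.argmax(scores))]

t_full = best_angle(X, y)
auc_resub = roc_auc_score(y, X @ [np.cos(t_full), np.sin(t_full)])

cv_aucs = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=1).split(X, y):
    t = best_angle(X[tr], y[tr])
    cv_aucs.append(roc_auc_score(y[te], X[te] @ [np.cos(t), np.sin(t)]))
print(f"re-substitution AUC {auc_resub:.3f} "
      f"vs cross-validated {np.mean(cv_aucs):.3f}")
```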

  20. Development and application of a high-performance liquid chromatography method using monolithic columns for the analysis of ecstasy tablets.

    PubMed

    Mc Fadden, Kim; Gillespie, John; Carney, Brian; O'Driscoll, Daniel

    2006-07-07

    A rapid and selective HPLC method using monolithic columns was developed for the separation and quantification of the principal amphetamines in ecstasy tablets. Three monolithic (Chromolith RP18e) columns of different lengths (25, 50 and 100 mm) were assessed. Validation studies including linearity, selectivity, precision, accuracy and limit of detection and quantification were carried out using the Chromolith SpeedROD, RP-18e, 50 mm x 4.6 mm column. Column backpressure and van Deemter plots demonstrated that monolithic columns provide higher efficiency at higher flow rates when compared to particulate columns without the loss of peak resolution. Application of the monolithic column to a large number of ecstasy tablets seized in Ireland ensured its suitability for the routine analysis of ecstasy tablets.

  1. A low-cost colorimeter.

    PubMed

    Jones, N B; Riley, C; Sheya, M S; Hosseinmardi, M M

    1984-01-01

    A need for a colorimeter with low capital and maintenance costs has been suggested for countries with foreign exchange problems and no local medical instrumentation industry. This paper puts forward a design for such a device based on a domestic light-bulb, photographic filters and photovoltaic cells. The principle of the design is the use of a balancing technique involving twin light paths for test solution and reference solution and an electronic bridge circuit. It is shown that proper selection of the components will allow the cost objectives to be met and also provide acceptable linearity, precision, accuracy and repeatability.

  2. Accuracy and reliability of stitched cone-beam computed tomography images.

    PubMed

    Egbert, Nicholas; Cagna, David R; Ahuja, Swati; Wicks, Russell A

    2015-03-01

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta-percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta-percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm, with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.
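
    The agreement statistics reported above reduce to a paired mean difference with a t-based confidence interval; the sketch below computes them for invented caliper and CBCT distance pairs.

```python
# Paired mean difference with a t-based 95% confidence interval: a minimal
# sketch; the caliper and CBCT distances below are invented for illustration.
import numpy as np
from scipy import stats

caliper = np.array([23.1, 31.4, 18.7, 42.0, 27.5, 35.2, 29.8, 21.3, 38.6, 25.9])
cbct = np.array([23.5, 31.9, 18.9, 42.3, 27.8, 35.5, 30.2, 21.6, 39.0, 26.2])

diff = cbct - caliper
mean, sd = diff.mean(), diff.std(ddof=1)
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1, loc=mean,
                                   scale=sd / np.sqrt(len(diff)))
print(f"mean difference {mean:.2f} mm "
      f"(95% CI {ci_low:+.2f} to {ci_high:+.2f} mm), SD {sd:.2f} mm")
```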

  3. A comparison between different finite elements for elastic and aero-elastic analyses.

    PubMed

    Mahran, Mohamed; ELsabbagh, Adel; Negm, Hani

    2017-11-01

    In the present paper, a comparison between five different shell finite elements, namely the Linear Triangular Element, the Linear Quadrilateral Element, the Linear Quadrilateral Element based on deformation modes, the 8-node Quadrilateral Element, and the 9-node Quadrilateral Element, is presented. The shape functions and the element equations related to each element are presented through a detailed mathematical formulation. Additionally, the Jacobian matrix for the second-order derivatives is simplified and used to derive each element's strain-displacement matrix in bending. The elements are compared using carefully selected elastic and aero-elastic benchmark problems, regarding the number of elements needed to reach convergence, the resulting accuracy, and the required computation time. The element best suited for elastic free-vibration analysis was found to be the Linear Quadrilateral Element with deformation-based shape functions, whereas the most suitable element for stress analysis was the 8-node Quadrilateral Element, and the most suitable element for aero-elastic analysis was the 9-node Quadrilateral Element. Although the Linear Triangular Element was the last choice for modal and stress analyses, it yields more accurate results in aero-elastic analyses, albeit with much longer computation times. Additionally, the 9-node Quadrilateral Element was found to be the best choice for the analysis of laminated composite plates.

  4. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers.

    PubMed

    Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.

  5. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    PubMed

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates, and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.

  6. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust to degradation from noisy features, even in vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise, and 3) good stability across various classification algorithms. PMID:24625071
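
    For contrast with the local-approximation idea, the sketch below implements the classical global RELIEF update, in which a feature's weight grows when it separates a sample from its nearest miss and shrinks when it separates it from its nearest hit; this is the baseline scheme that LHR builds on, not LHR itself.

```python
# Classical global RELIEF feature weighting: a minimal sketch, iterating over
# all samples deterministically.
import numpy as np

def relief_weights(X, y):
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)   # Manhattan distances to sample i
        dist[i] = np.inf
        same, diff = y == y[i], y != y[i]
        hit = int(np.argmin(np.where(same, dist, np.inf)))    # nearest hit
        miss = int(np.argmin(np.where(diff, dist, np.inf)))   # nearest miss
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n

rng = np.random.default_rng(0)
n = 120
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, 10))
X[:, 0] += 1.5 * y                  # only feature 0 carries class information
w = relief_weights(X, y)
print("top-ranked feature:", int(np.argmax(w)))
print("weights:", np.round(w, 2))
```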

  7. Index Fund Selections with GAs and Classifications Based on Turnover

    NASA Astrophysics Data System (ADS)

    Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji

    It is well known that index fund selection is important for hedging the risk of investment in a stock market. Here, 'selection' means that, for stock index futures, n companies are selected from all those in the market. For index fund selection, Orito et al.(6) proposed a method consisting of the following two steps: Step 1 selects N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 constructs a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results of their method depends on the length of the time data (price data) used in the experiments. The main purpose of this paper is to introduce a more effective rule for Step 1, based on turnover. The method consisting of the turnover-based Step 1 and Step 2 is examined with numerical experiments on the 1st Section of the Tokyo Stock Exchange. The results show that with our method it is possible to construct a more effective index fund than with the method of Orito et al.(6). The accuracy of the results of our method depends little on the length of the time data (turnover data). The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time series data.

  8. Determination of sulfonamide antibiotics and metabolites in liver, muscle and kidney samples by pressurized liquid extraction or ultrasound-assisted extraction followed by liquid chromatography-quadrupole linear ion trap-tandem mass spectrometry (HPLC-QqLIT-MS/MS).

    PubMed

    Hoff, Rodrigo Barcellos; Pizzolato, Tânia Mara; Peralba, Maria do Carmo Ruaro; Díaz-Cruz, M Silvia; Barceló, Damià

    2015-03-01

    Sulfonamides are widely used in human and veterinary medicine. The presence of sulfonamide residues in food is an issue of great concern. In the present work, a method for the targeted analysis of residues of 16 sulfonamides and metabolites in liver of several species has been developed and validated. Extraction and clean-up were statistically optimized using central composite design experiments. Two extraction methods were developed, validated and compared: i) pressurized liquid extraction, in which samples were defatted with hexane and subsequently extracted with acetonitrile, and ii) ultrasound-assisted extraction with acetonitrile followed by liquid-liquid extraction with hexane. Extracts were analyzed by liquid chromatography-quadrupole linear ion trap-tandem mass spectrometry. The validation procedure was based on Commission Decision 2002/657/EC and included the assessment of parameters such as decision limit (CCα), detection capability (CCβ), sensitivity, selectivity, accuracy and precision. The method's performance was satisfactory, with CCα values within the range of 111.2-161.4 µg kg⁻¹, limits of detection of 10 µg kg⁻¹ and accuracy values around 100% for all compounds.

  9. Simultaneous determination of naringenin and hesperetin in rats after oral administration of Da-Cheng-Qi decoction by high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Liu, Ying; Xu, Fengguo; Zhang, Zunjian; Song, Rui; Tian, Yuan

    2008-07-01

    To quantify naringenin and hesperetin in rat plasma after oral administration of Da-Cheng-Qi decoction, a famous purgative traditional Chinese medicine, a high-performance liquid chromatography-tandem mass spectrometry method was developed and validated. The HPLC separation was carried out on a Zorbax SB-C18 column using 0.1% formic acid-methanol as the mobile phase and estazolam as the internal standard, after the rat plasma samples had been cleaned up by one-step protein precipitation with methanol. Atmospheric pressure chemical ionization in positive ion mode with selected reaction monitoring was used to determine the active components. The method was validated in terms of recovery, linearity, accuracy and precision (intra- and inter-batch variation). The recoveries of naringenin and hesperetin were 72.8-76.6% and 75.7-77.2%, respectively. Linearity in rat plasma was observed over the range of 0.5-250 ng/mL (r² > 0.99) for both naringenin and hesperetin. The accuracy and precision were well within the acceptable range, and the relative standard deviation of the measured rat plasma samples was less than 15% (n = 5). The validated method was successfully applied to the evaluation of the pharmacokinetics of naringenin and hesperetin administered to six rats.

  10. Multivariate Predictors of Music Perception and Appraisal by Adult Cochlear Implant Users

    PubMed Central

    Gfeller, Kate; Oleson, Jacob; Knutson, John F.; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol

    2009-01-01

    The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, the limitations of speech perception in predicting music perception and appraisal underscore the value of music perception as an alternative outcome measure for evaluating implant success. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music. PMID:18669126

  11. Validation and Uncertainty Estimation of an Ecofriendly and Stability-Indicating HPLC Method for Determination of Diltiazem in Pharmaceutical Preparations

    PubMed Central

    Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo

    2013-01-01

    A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was based on a C18 analytical column using a mobile phase consisting of ethanol:phosphoric acid solution (pH = 2.5) (35:65, v/v). Column temperature was set at 50°C and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear over the diltiazem concentration range of 0.5–50 μg/mL (r2 = 0.9996). Precision was evaluated by replicate analysis, in which the % relative standard deviation (RSD) values for peak areas were below 2.0. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty (5.63%) of the method was also estimated from method validation data. Accordingly, the proposed validated and sustainable procedure proved suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778

  12. A ratiometric strategy -based electrochemical sensing interface for the sensitive and reliable detection of imidacloprid.

    PubMed

    Li, Xueyan; Kan, Xianwen

    2018-04-30

    In this study, a ratiometric strategy-based electrochemical sensor was developed by electropolymerization of thionine (THI) and β-cyclodextrin (β-CD) composite films on a glassy carbon electrode surface for imidacloprid (IMI) detection. THI played the role of an inner reference element to provide a built-in correction. In addition, the modified β-CD showed good selective enrichment for IMI to improve the sensitivity and anti-interference ability of the sensor. The current ratio between IMI and THI was calculated as the detected signal for IMI sensing. Compared with common single-signal sensing, the proposed ratiometric strategy gave a wider linear range of 4.0 × 10(-8)-1.0 × 10(-5) mol L(-1) and a lower limit of detection of 1.7 × 10(-8) mol L(-1) for IMI detection. In addition, the ratiometric strategy endowed the sensor with good accuracy, reproducibility, and stability. The sensor was also used for IMI determination in real samples with satisfactory results. The simple, effective, and reliable approach reported in this study can be further used to prepare ratiometric strategy-based electrochemical sensors for the selective and sensitive detection of other compounds with good accuracy and stability.

  13. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    PubMed

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires substantial computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode brain responses to different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to estimate the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN algorithm is compared with a region of interest (ROI) based method and with MVPA-based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
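
    A minimal sketch of the selection-plus-classification pipeline described above: a t-test screens "voxels" and a multi-class linear SVM classifies the retained ones. The synthetic data, the 0.01 threshold, and the one-vs-rest contrast are illustrative assumptions, not taken from the paper.

        # t-test voxel selection followed by multi-class SVM classification.
        import numpy as np
        from scipy import stats
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 5000))       # 120 trials x 5000 voxels
        y = rng.integers(0, 4, size=120)       # 4 stimulus categories

        # Two-sample t-test per voxel: category 0 trials vs. the rest (illustrative contrast).
        t, p = stats.ttest_ind(X[y == 0], X[y != 0], axis=0)
        selected = p < 0.01                    # keep only "significant" voxels

        clf = SVC(kernel="linear", decision_function_shape="ovr")
        acc = cross_val_score(clf, X[:, selected], y, cv=5).mean()
        print(f"mean CV accuracy: {acc:.3f}")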

  14. Simultaneous determination of multiclass preservatives including isothiazolinones and benzophenone-type UV filters in household and personal care products by micellar electrokinetic chromatography.

    PubMed

    Lopez-Gazpio, Josu; Garcia-Arrona, Rosa; Millán, Esmeralda

    2015-04-01

    In this work, a simple and reliable micellar electrokinetic chromatography method for the separation and quantification of 14 preservatives, including isothiazolinones, and two benzophenone-type UV filters in household, cosmetic and personal care products was developed. The selected priority compounds are widely used as ingredients in many personal care products, and are included in the European Regulation concerning cosmetic products. The electrophoretic separation parameters were optimized by means of a modified chromatographic response function in combination with an experimental design, namely a central composite design. After optimization of experimental conditions, the BGE selected for the separation of the targets consisted of 60 mM SDS, 18 mM sodium tetraborate, pH 9.4 and 10% v/v methanol. The MEKC method was checked in terms of linearity, LODs and quantification, repeatability, intermediate precision, and accuracy, providing appropriate values (i.e. R(2) ≥ 0.992, repeatability RSD values ˂9%, and accuracy 90-115%). Applicability of the validated method was successfully assessed by quantifying preservatives and UV filters in commercial consumer products. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments is a complex and non-linear problem, owing to the intricate relationships among the many variables involved in characterizing and modelling environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
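
    The following sketch illustrates the wrapper-style combination described above, using scikit-learn's sequential forward selection around a Gaussian process regressor as a stand-in for the paper's WFS + GPR scheme; the synthetic descriptors and the choice of four retained features are assumptions.

        # Wrapper feature selection (forward) around a GPR noise-level model.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(150, 12))                   # 12 candidate urban descriptors
        laeq = 60 + 3*X[:, 0] - 2*X[:, 3] + rng.normal(0, 1, 150)   # synthetic LAeq

        gpr = GaussianProcessRegressor(normalize_y=True)
        wrapper = SequentialFeatureSelector(gpr, n_features_to_select=4, direction="forward")
        wrapper.fit(X, laeq)
        X_sel = wrapper.transform(X)
        print("selected columns:", np.flatnonzero(wrapper.get_support()))
        print("R^2 (5-fold CV):", cross_val_score(gpr, X_sel, laeq, cv=5).mean().round(3))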

  16. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
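
    As a toy version of the comparison above, the sketch below fits a simple linear regression on time and a multiple linear regression on correlated sensor readings, both predicting a third quantity; the synthetic signals stand in for real WSN measurements.

        # Simple (time-only) vs. multivariate linear regression for sensor prediction.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(3)
        t = np.arange(200, dtype=float)
        temperature = 20 + 0.01*t + rng.normal(0, 0.3, 200)
        voltage = 3.0 - 0.001*t + rng.normal(0, 0.02, 200)
        humidity = 80 - 0.5*temperature + 5*voltage + rng.normal(0, 0.5, 200)

        inputs = np.column_stack([temperature, voltage])
        simple = LinearRegression().fit(t[:, None], humidity)    # humidity ~ time
        multi = LinearRegression().fit(inputs, humidity)         # humidity ~ other sensors

        print("MAE, time only   :", round(mean_absolute_error(humidity, simple.predict(t[:, None])), 3))
        print("MAE, multivariate:", round(mean_absolute_error(humidity, multi.predict(inputs)), 3))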

  17. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    PubMed

    Hamid, KA; Yusoff, AN; Rahman, MZA; Mohamad, M; Hamid, AIA

    2012-04-01

    This fMRI study concerns modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in the human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) < 0.001) in bilateral HG and STG. Model comparison results showed strong evidence of STG as the input centre. The winning model was preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830 and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p < 0.05). Subsequent model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with the activity in STG influencing the STG-HG connectivity non-linearly.

  18. A novel approach for quantification and analysis of the color Doppler twinkling artifact with application in noninvasive surface roughness characterization: an in vitro phantom study.

    PubMed

    Jamzad, Amoon; Setarehdan, Seyed Kamaledin

    2014-04-01

    The twinkling artifact is an undesired phenomenon within color Doppler sonograms that usually appears at the site of internal calcifications. Since the appearance of the twinkling artifact is correlated with the roughness of the calculi, noninvasive roughness estimation of the internal stones may be considered as a potential twinkling artifact application. This article proposes a novel quantitative approach for measurement and analysis of twinkling artifact data for roughness estimation. A phantom was developed with 7 quantified levels of roughness. The Doppler system was initially calibrated by the proposed procedure to facilitate the analysis. A total of 1050 twinkling artifact images were acquired from the phantom, and 32 novel numerical measures were introduced and computed for each image. The measures were then ranked on the basis of roughness quantification ability using different methods. The performance of the proposed twinkling artifact-based surface roughness quantification method was finally investigated for different combinations of features and classifiers. Eleven features were shown to be the most efficient numerical twinkling artifact measures in roughness characterization. The linear classifier outperformed other methods for twinkling artifact classification. The pixel count measures produced better results among the other categories. The sequential selection method showed higher accuracy than other individual rankings. The best roughness recognition average accuracy of 98.33% was obtained by the first 5 principal components and the linear classifier. The proposed twinkling artifact analysis method could recognize the phantom surface roughness with an average accuracy of 98.33%. This method may also be applicable for noninvasive calculi characterization in treatment management.

  19. Genome-based prediction of test cross performance in two subsequent breeding cycles.

    PubMed

    Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias

    2012-12-01

    Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, compare cross validation with validation using genetic material of the subsequent breeding cycle, and investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content; for standard ML the correlation dropped to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program cannot be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
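
    A minimal sketch of ridge regression as an approximation to marker-based BLUP, with the shrinkage set from a preliminary heritability estimate. The mapping lambda = m(1 - h2)/h2 for m markers is a common RR-BLUP convention assumed here, not necessarily the authors' exact formulation; genotypes and phenotypes are simulated.

        # Ridge regression approximating RR-BLUP, penalty set from heritability.
        import numpy as np

        rng = np.random.default_rng(4)
        n, m, h2 = 310, 384, 0.6
        X = rng.integers(0, 3, size=(n, m)).astype(float)   # SNP genotypes coded 0/1/2
        X -= X.mean(axis=0)                                 # center marker columns
        g = X @ rng.normal(0, 1, m)                         # true genetic values
        y = g + rng.normal(0, np.sqrt(g.var() * (1 - h2) / h2), n)

        lam = m * (1 - h2) / h2                             # assumed RR-BLUP shrinkage
        u = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ (y - y.mean()))
        gebv = X @ u                                        # predicted performance
        print("corr(observed, predicted):", np.corrcoef(y, gebv)[0, 1].round(3))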

  20. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60-68% superiority over the corresponding single-environment analyses for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
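
    The sketch below contrasts a linear and a Gaussian kernel on simulated marker data, using kernel ridge regression as a simple stand-in for the Bayesian kernel models above; the median-distance bandwidth heuristic is an assumption, not the paper's empirical Bayes estimate.

        # Linear vs. Gaussian kernel for genomic prediction (kernel ridge stand-in).
        import numpy as np
        from scipy.spatial.distance import pdist
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        X = rng.normal(size=(200, 300))                     # 300 markers
        y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.3, 200)  # non-additive

        gamma = 1.0 / np.median(pdist(X, "sqeuclidean"))    # bandwidth heuristic
        lin = KernelRidge(kernel="linear", alpha=1.0)
        rbf = KernelRidge(kernel="rbf", gamma=gamma, alpha=1.0)

        print("linear kernel   R^2:", cross_val_score(lin, X, y, cv=5).mean().round(3))
        print("Gaussian kernel R^2:", cross_val_score(rbf, X, y, cv=5).mean().round(3))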

  1. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  2. Validation of reliable and selective methods for direct determination of glyphosate and aminomethylphosphonic acid in milk and urine using LC-MS/MS.

    PubMed

    Jensen, Pamela K; Wujcik, Chad E; McGuire, Michelle K; McGuire, Mark A

    2016-01-01

    Simple high-throughput procedures were developed for the direct analysis of glyphosate [N-(phosphonomethyl)glycine] and aminomethylphosphonic acid (AMPA) in human and bovine milk and human urine matrices. Samples were extracted with an acidified aqueous solution on a high-speed shaker. Stable isotope labeled internal standards were added with the extraction solvent to ensure accurate tracking and quantitation. An additional cleanup procedure using partitioning with methylene chloride was required for milk matrices to minimize the presence of matrix components that can impact the longevity of the analytical column. Both analytes were analyzed directly, without derivatization, by liquid chromatography tandem mass spectrometry using two separate precursor-to-product transitions that ensure and confirm the accuracy of the measured results. Method performance was evaluated during validation through a series of assessments that included linearity, accuracy, precision, selectivity, ionization effects and carryover. Limits of quantitation (LOQ) were determined to be 0.1 and 10 µg/L (ppb) for urine and milk, respectively, for both glyphosate and AMPA. Mean recoveries for all matrices were within 89-107% at three separate fortification levels including the LOQ. Precision for replicates was ≤ 7.4% relative standard deviation (RSD) for milk and ≤ 11.4% RSD for urine across all fortification levels. All human and bovine milk samples used for selectivity and ionization effects assessments were free of any detectable levels of glyphosate and AMPA. Some of the human urine samples contained trace levels of glyphosate and AMPA, which were background subtracted for accuracy assessments. Ionization effects testing showed no significant biases from the matrix. A successful independent external validation was conducted using the more complicated milk matrices to demonstrate method transferability.

  3. Validation of an assay for quantification of free normetanephrine, metanephrine and methoxytyramine in plasma by high performance liquid chromatography with coulometric detection: Comparison of peak-area vs. peak-height measurements.

    PubMed

    Nieć, Dawid; Kunicki, Paweł K

    2015-10-01

    Measurements of plasma concentrations of free normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MTY) constitute the most diagnostically accurate screening test for pheochromocytomas and paragangliomas. The aim of this article is to present the results from a validation of an analytical method utilizing high performance liquid chromatography with coulometric detection (HPLC-CD) for quantifying plasma free NMN, MN and MTY. Additionally, peak integration by height and by area, and the use of one calibration curve for all batches or an individual calibration curve for each batch of samples, were explored to determine the optimal approach with regard to accuracy and precision. The method was validated using charcoal-stripped plasma spiked with solutions of NMN, MN, MTY and internal standard (4-hydroxy-3-methoxybenzylamine), with the exception of selectivity, which was evaluated by analysis of real plasma samples. Calibration curve performance, accuracy, precision and recovery were determined following both peak-area and peak-height measurements and the obtained results were compared. The most accurate and precise method of calibration was evaluated by analyzing quality control samples at three concentration levels in 30 analytical runs. The detector response was linear over the entire tested concentration range from 10 to 2000 pg/mL with R(2)≥0.9988. The LLOQ was 10 pg/mL for each analyte of interest. To improve accuracy for measurements at low concentrations, a weighted (1/amount) linear regression model was employed, which resulted in inaccuracies of -2.48 to 9.78% and 0.22 to 7.81% following peak-area and peak-height integration, respectively. The imprecisions ranged from 1.07 to 15.45% and from 0.70 to 11.65% for peak-area and peak-height measurements, respectively. The optimal approach to calibration was the one utilizing an individual calibration curve for each batch of samples and peak-height measurements. It was characterized by inaccuracies ranging from -3.39 to +3.27% and imprecisions from 2.17 to 13.57%. The established HPLC-CD method enables accurate and precise measurements of plasma free NMN, MN and MTY with reasonable selectivity. Preparing the calibration curve based on peak-height measurements for each batch of samples yields optimal accuracy and precision. Copyright © 2015. Published by Elsevier B.V.
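
    A small sketch of the weighted (1/amount) calibration idea described above: weighting the fit by 1/concentration improves back-calculated accuracy at the low end of the curve. The simulated responses are invented; note that numpy's polyfit applies weights to the unsquared residuals, so sqrt(1/x) is passed to obtain 1/x weighting of the squared residuals.

        # Weighted (1/amount) linear calibration with back-calculated inaccuracy.
        import numpy as np

        rng = np.random.default_rng(6)
        conc = np.array([10, 25, 50, 100, 250, 500, 1000, 2000], dtype=float)  # pg/mL
        response = 0.8 * conc * (1 + rng.normal(0, 0.03, conc.size))           # peak height

        w = 1.0 / conc                                    # 1/amount weighting
        slope, intercept = np.polyfit(conc, response, 1, w=np.sqrt(w))
        back_calc = (response - intercept) / slope
        print("inaccuracy (%):", ((back_calc - conc) / conc * 100).round(2))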

  4. Analysis and application of ERTS-1 data for regional geological mapping

    NASA Technical Reports Server (NTRS)

    Gold, D. P.; Parizek, R. R.; Alexander, S. A.

    1973-01-01

    Combined visual and digital techniques of analysing ERTS-1 data for geologic information have been tried on selected areas in Pennsylvania. The major physiographic and structural provinces show up well. Supervised mapping, following the imaged expression of known geologic features on ERTS band 5 enlargements (1:250,000) of parts of eastern Pennsylvania, delimited the Diabase Sills and the Precambrian rocks of the Reading Prong with remarkable accuracy. From unsupervised mapping, transgressive linear features are apparent in unexpected density, and exhibit strong control over river valley and stream channel directions. They are unaffected by bedrock type, age, or primary structural boundaries, which suggests they are either rejuvenated basement joint directions on different scales, or a recently impressed structure possibly associated with a drifting North American plate. With ground mapping and underflight data, six scales of linear features have been recognized.

  5. Automated discrimination of dementia spectrum disorders using extreme learning machine and structural T1 MRI features.

    PubMed

    Jongin Kim; Boreom Lee

    2017-07-01

    The classification of neuroimaging data for the diagnosis of Alzheimer's Disease (AD) is one of the main research goals of the neuroscience and clinical fields. In this study, we applied an extreme learning machine (ELM) classifier to discriminate AD and mild cognitive impairment (MCI) from normal controls (NC). We compared the performance of ELM with that of a linear kernel support vector machine (SVM) for 718 structural MRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data consisted of normal controls, MCI converters (MCI-C), MCI non-converters (MCI-NC), and AD patients. We employed the SVM-based recursive feature elimination (RFE-SVM) algorithm to find the optimal subset of features. We found that the RFE-SVM feature selection approach in combination with ELM yields superior classification accuracy to that of the linear kernel SVM for structural T1 MRI data.
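
    A minimal sketch of the RFE-SVM feature selection stage shared by the approaches above, with a linear SVM ranking features; since scikit-learn has no built-in ELM, the final classifier here is also a linear SVM, which is a substitution, not the paper's ELM.

        # RFE with a linear SVM to pick features, then classify on the subset.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        X = rng.normal(size=(300, 400))           # 400 structural MRI features
        y = rng.integers(0, 2, size=300)          # e.g. AD vs. normal control
        X[:, :20] += y[:, None] * 0.8             # make 20 features informative

        rfe = RFE(SVC(kernel="linear"), n_features_to_select=40, step=0.1)
        X_sel = rfe.fit_transform(X, y)
        print("accuracy on selected features:",
              cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean().round(3))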

  6. [Discriminant analysis to predict the clinical diagnosis of primary immunodeficiencies: a preliminary report].

    PubMed

    Murata, Chiharu; Ramírez, Ana Belén; Ramírez, Guadalupe; Cruz, Alonso; Morales, José Luis; Lugo-Reyes, Saul Oswaldo

    2015-01-01

    The features in a clinical history from a patient with suspected primary immunodeficiency (PID) direct the differential diagnosis through pattern recognition. PIDs are a heterogeneous group of more than 250 congenital diseases with increased susceptibility to infection, inflammation, autoimmunity, allergy and malignancy. Linear discriminant analysis (LDA) is a multivariate supervised classification method to sort objects of study into groups by finding linear combinations of a number of variables. To identify the features that best explain membership of pediatric PID patients to a group of defect or disease. An analytic cross-sectional study was done with a pre-existing database of clinical and laboratory records from 168 patients with PID, followed at the National Institute of Pediatrics during 1991-2012; it was used to build linear discriminant models that would explain membership of each patient to the different defect groups and to the most prevalent PIDs in our registry. After a preliminary run, only 30 features were included (4 demographic, 10 clinical, 10 laboratory, 6 germs), with which the training models were developed through a stepwise regression algorithm. We compared the automatic feature selection with a selection made by a human expert, and then assessed the diagnostic usefulness of the resulting models (sensitivity, specificity, prediction accuracy and kappa coefficient), with 95% confidence intervals. The models incorporated 6 to 14 features to explain membership of PID patients to the five most abundant defect groups (combined, antibody, well-defined, dysregulation and phagocytosis), and to the four most prevalent PID diseases (X-linked agammaglobulinemia, chronic granulomatous disease, common variable immunodeficiency and ataxia-telangiectasia). In practically all cases of feature selection, the machine outperformed the human expert. Diagnosis prediction using the equations created had a global accuracy of 83 to 94%, with sensitivity of 60 to 100%, specificity of 83 to 95% and kappa coefficient of 0.37 to 0.76. In general, the selection of features has clinical plausibility, and the practical advantage of utilizing only clinical attributes, infecting germs and routine lab results (blood cell counts and serum immunoglobulins). The performance of the model as a diagnostic tool was acceptable. The study's main limitations are a limited sample size and a lack of cross-validation. This is only the first step in the construction of a machine learning system, with a wider approach that includes a larger database and different methodologies, to assist the clinical diagnosis of primary immunodeficiencies.

  7. Estimating orientation using magnetic and inertial sensors and different sensor fusion approaches: accuracy assessment in manual and locomotion tasks.

    PubMed

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-10-09

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
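
    As a toy illustration of complementary filtering, one of the two fusion families compared above, the sketch below blends a drifting gyroscope integral with a noisy accelerometer inclination; the signals and the blend factor alpha are invented.

        # Complementary filter: high-pass the gyro integral, low-pass the accel angle.
        import numpy as np

        rng = np.random.default_rng(8)
        dt, n = 0.01, 2000
        true_pitch = 20 * np.sin(2 * np.pi * 0.2 * np.arange(n) * dt)          # degrees
        gyro = np.gradient(true_pitch, dt) + 0.5 + rng.normal(0, 0.5, n)       # rate + bias
        accel_pitch = true_pitch + rng.normal(0, 2.0, n)                       # noisy angle

        alpha, est = 0.98, np.zeros(n)
        for k in range(1, n):
            est[k] = alpha * (est[k-1] + gyro[k] * dt) + (1 - alpha) * accel_pitch[k]

        print("RMS error (deg):", np.sqrt(np.mean((est - true_pitch) ** 2)).round(2))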

  8. SVM Based Descriptor Selection and Classification of Neurodegenerative Disease Drugs for Pharmacological Modeling.

    PubMed

    Shahid, Mohammad; Shahzad Cheema, Muhammad; Klenner, Alexander; Younesi, Erfan; Hofmann-Apitius, Martin

    2013-03-01

    Systems pharmacological modeling of drug mode of action for the next generation of multitarget drugs may open new routes for drug design and discovery. Computational methods are widely used in this context, amongst which support vector machines (SVM) have proven successful in addressing the challenge of classifying drugs with similar features. We have applied one such SVM-based approach, namely SVM-based recursive feature elimination (SVM-RFE). We use the approach to predict the pharmacological properties of drugs widely used against complex neurodegenerative disorders (NDD) and to build an in-silico computational model for the binary classification of NDD drugs from other drugs. Application of an SVM-RFE model to a set of drugs successfully classified NDD drugs from non-NDD drugs and resulted in an overall accuracy of ~80% with 10-fold cross-validation using the 40 top-ranked molecular descriptors selected out of 314 descriptors in total. Moreover, the SVM-RFE method outperformed linear discriminant analysis (LDA) based feature selection and classification. The model reduced the multidimensional descriptor space of drugs dramatically and predicted NDD drugs with high accuracy, while avoiding overfitting. Based on these results, NDD-specific focused libraries of drug-like compounds can be designed and existing NDD-specific drugs can be characterized by a well-characterized set of molecular descriptors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Scoring and staging systems using cox linear regression modeling and recursive partitioning.

    PubMed

    Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H

    2006-01-01

    Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
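
    A minimal sketch of the scoring-and-staging idea: fit a Cox model, use its linear predictor as a prognostic score, and cut the score into stages. It assumes the lifelines package is available; the simulated covariates and the tertile cut points are illustrative, not the paper's cross-validation and amalgamation algorithms.

        # Cox linear predictor as a prognostic score, staged by tertiles.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(9)
        n = 300
        df = pd.DataFrame({
            "albumin": rng.normal(3.5, 0.5, n),      # hypothetical predictors
            "bilirubin": rng.lognormal(0, 0.5, n),
        })
        risk = -0.8 * df["albumin"] + 0.6 * np.log(df["bilirubin"])
        df["T"] = rng.exponential(np.exp(-risk))             # survival times
        df["E"] = (rng.uniform(size=n) < 0.7).astype(int)    # event indicator

        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
        score = cph.predict_log_partial_hazard(df)           # Cox linear predictor
        df["stage"] = pd.qcut(score, 3, labels=[1, 2, 3])    # stage by score tertiles
        print(df["stage"].value_counts().sort_index())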

  10. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW at a sampling rate of 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while remaining fully synthesizable for future Very Large Scale Integration (VLSI) systems.

  11. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-01

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of −20 °C to 100 °C. The sensor consumed 95 μW at a sampling rate of 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while remaining fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316
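
    As a generic illustration of curvature correction in the spirit of the linearity enhancement above, the sketch below fits a second-order inverse polynomial to a bowed sensor curve at calibration time and then inverts the raw readings; the curvature numbers are invented, and this is a software analogy, not the on-chip circuit itself.

        # Second-order curvature correction: map bowed raw output back to temperature.
        import numpy as np

        temp = np.linspace(-20, 100, 25)                  # reference temperatures
        raw = 500 + 4.0 * temp - 0.006 * temp**2          # bowed sensor output

        coeffs = np.polyfit(raw, temp, 2)                 # inverse mapping output -> temp
        linearized = np.polyval(coeffs, raw)

        worst = np.max(np.abs(linearized - temp))
        print(f"worst-case residual after correction: {worst:.4f} deg C")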

  12. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-04-29

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil, using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, considering the mean performance of the genotypes across all environments (no interaction effect); the performance in each environment (with an interaction effect); and simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection.

  13. Automated training site selection for large-area remote-sensing image analysis

    NASA Astrophysics Data System (ADS)

    McCaffrey, Thomas M.; Franklin, Steven E.

    1993-11-01

    A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classifications by minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by using a purely statistical selection of homogeneous sites which then can be compared to field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites to be used to train the classification decision rules. The selection of the homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and the Student's t-statistic. Comparisons of site means are conducted with a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.
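
    A small sketch of the statistical screen described above: a candidate pixel window is accepted as homogeneous when its coefficient of variation is small and an F-test and t-test fail to separate it from pixels already accepted for that class; all thresholds and the sample data are illustrative.

        # Homogeneity screen: coefficient of variation, F-test, and t-test.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        accepted = rng.normal(100, 3, 500)       # pixels already in the class
        candidate = rng.normal(101, 3, 64)       # new 8x8 candidate window

        cv = candidate.std(ddof=1) / candidate.mean()
        f = candidate.var(ddof=1) / accepted.var(ddof=1)
        f_p = 2 * min(stats.f.cdf(f, 63, 499), 1 - stats.f.cdf(f, 63, 499))
        t_p = stats.ttest_ind(candidate, accepted, equal_var=False).pvalue

        homogeneous = (cv < 0.05) and (f_p > 0.05) and (t_p > 0.05)
        print(f"CV={cv:.3f}  F p={f_p:.3f}  t p={t_p:.3f}  accept={homogeneous}")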

  14. Relationship Between Motor Variability, Accuracy, and Ball Speed in the Tennis Serve

    PubMed Central

    Antúnez, Ruperto Menayo; Hernández, Francisco Javier Moreno; García, Juan Pedro Fuentes; Vaíllo, Raúl Reina; Arroyo, Jesús Sebastián Damas

    2012-01-01

    The main objective of this study was to analyze motor variability in the performance of the tennis serve and its relationship to performance outcome. Seventeen male tennis players took part in the research, each performing 20 serves. Linear and non-linear variability of the hand movement was measured by 3D Motion Tracking. Ball speed was recorded with a sports radar gun and the ball bounces were video recorded to calculate accuracy. The results showed a relationship between the amount and non-linear structure of movement variability and the outcome of the serve. The study also found that movement predictability correlates with performance. An increase in the amount of movement variability could affect tennis serve performance negatively by reducing the speed and accuracy of the ball. PMID:23486998

  15. Enantioselective determination of 3-n-butylphthalide (NBP) in human plasma by liquid chromatography on a teicoplanin-based chiral column coupled with tandem mass spectrometry.

    PubMed

    Diao, Xingxing; Ma, Zhiyu; Lei, Peng; Zhong, Dafang; Zhang, Yifan; Chen, Xiaoyan

    2013-11-15

    A novel and sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated to determine the exposure of 3-n-butylphthalide (NBP) enantiomers in human plasma. The NBP enantiomers were extracted from human plasma using methyl tert-butyl ether. The baseline separation of R-(+)-NBP and S-(-)-NBP was achieved within 11.0 min using a teicoplanin-based Astec Chirobiotic T column (250 mm × 4.6 mm i.d., 5 μm) under isocratic conditions at a flow rate of 0.6 mL/min. The selection of the chiral stationary phase and the effect of the mobile phase composition on the resolution of the enantiomers were discussed. The selectivity, linearity, precision, accuracy, matrix effect, recovery, and stability were evaluated under optimized conditions. The LC-MS/MS method using 200 μL of human plasma was linear over the concentration range of 5.00-400 ng/mL for each enantiomer. The lower limit of quantification (LLOQ) for both enantiomers was 5.00 ng/mL. The intra- and inter-assay precision values of the replicated quality control samples were within 8.0% for each enantiomer. The mean accuracy values for the quality control samples were within ±6.1% of the nominal values for R-(+)-NBP and S-(-)-NBP. No chiral inversion was observed during sample storage, preparation, and analysis. The method proved suitable for enantioselective pharmacokinetic studies of NBP after an oral administration of a therapeutic dose of racemic NBP. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Combining simplicity with cost-effectiveness: Investigation of potential counterfeit of proton pump inhibitors through simulated formulations using thin-layer chromatography.

    PubMed

    Bhatt, Nejal M; Chavada, Vijay D; Sanyal, Mallika; Shrivastav, Pranav S

    2016-11-18

    A simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the analysis of proton pump inhibitors (PPIs) and their co-formulated drugs, available as binary combinations. Planar chromatographic separation was achieved using a single mobile phase comprising toluene:iso-propanol:acetone:ammonia (5.0:2.3:2.5:0.2, v/v/v/v) for the analysis of 14 analytes on aluminium-backed layers of silica gel 60 F254. Densitometric determination of the separated spots was done at 290 nm. The method was validated according to ICH guidelines for linearity, precision and accuracy, sensitivity, specificity and robustness. The method showed good linear response for the selected drugs as indicated by the high values of correlation coefficients (≥0.9993). The limit of detection and limit of quantitation were in the range of 6.9-159.2 ng/band and 20.8-478.1 ng/band, respectively, for all the analytes. The optimized conditions afforded adequate resolution of each PPI from their co-formulated drugs and provided unambiguous identification of the co-formulated drugs from their homologous retardation factors (hRf). The only limitation of the method was its inability to separate two PPIs, rabeprazole and lansoprazole, from each other. Nevertheless, it is proposed that recording peak spectra and comparing them with a standard drug spot can be a viable option for assignment of TLC spots. Method performance was assessed by analyzing different laboratory-simulated mixtures and some marketed formulations of the selected drugs. The developed method was successfully used to investigate potential counterfeiting of PPIs through a series of simulated formulations with good accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. DNA sequencing up to 1300 bases in two hours by capillary electrophoresis with mixed replaceable linear polyacrylamide solutions.

    PubMed

    Zhou, H; Miller, A W; Sosic, Z; Buchholz, B; Barron, A E; Kotler, L; Karger, B L

    2000-03-01

    This paper presents results on ultralong read DNA sequencing with relatively short separation times using capillary electrophoresis with replaceable polymer matrixes. In previous work, the effectiveness of mixed replaceable solutions of linear polyacrylamide (LPA) was demonstrated, and 1000 bases were routinely obtained in less than 1 h. Substantially longer read lengths have now been achieved by a combination of improved formulation of LPA mixtures, optimization of temperature and electric field, adjustment of the sequencing reaction, and refinement of the base-caller. The average molar masses of LPA used as DNA separation matrixes were measured by gel permeation chromatography and multiangle laser light scattering. Newly formulated matrixes comprising 0.5% (w/w) 270 kDa and 2% (w/w) 10 or 17 MDa LPA raised the optimum column temperature from 60 to 70 degrees C, increasing the selectivity for large DNA fragments, while maintaining high selectivity for small fragments as well. This improved resolution was further enhanced by reducing the electric field strength from 200 to 125 V/cm. In addition, because sequencing accuracy beyond 1000 bases was diminished by the low signal from G-terminated fragments when the standard reaction protocol for a commercial dye primer kit was used, the amount of these fragments was doubled. Augmenting the base-calling expert system with rules specific for low peak resolution also had a significant effect, contributing slightly less than half of the total increase in read length. With full optimization, this read length reached up to 1300 bases (average 1250) with 98.5% accuracy in 2 h for a single-stranded M13 template.

  18. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    PubMed Central

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618

  19. An SVM-based classifier for estimating the state of various rotating components in agro-industrial machinery with a vibration signal acquired from a single point on the machine chassis.

    PubMed

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-11-03

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.
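
    A minimal sketch of the evaluation protocol above: Leave-One-Out cross-validation of a linear SVM on a small feature table; the seven synthetic columns stand in for the selected vibration features, and the class separation is artificial.

        # Leave-One-Out cross-validation of a linear SVM on vibration-style features.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(11)
        X = rng.normal(size=(80, 7))               # 7 selected vibration features
        y = rng.integers(0, 2, size=80)            # component state: ok / faulty
        X[y == 1, 0] += 1.5                        # separate the classes a little

        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
        print(f"LOO accuracy: {acc:.3f}")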

  20. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including the Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., the proportion of phenotypic variability explained by the genetic architecture, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
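
    Of the nonparametric methods reviewed above, the Nadaraya-Watson estimator is the simplest to state: a kernel-weighted average of training phenotypes. The sketch below shows it on one-dimensional toy data; the Gaussian kernel and bandwidth are illustrative choices.

        # Nadaraya-Watson kernel regression on toy data.
        import numpy as np

        rng = np.random.default_rng(12)
        x_train = rng.uniform(-3, 3, 200)
        y_train = np.sin(x_train) + rng.normal(0, 0.2, 200)   # non-linear signal

        def nadaraya_watson(x0, x, y, h=0.3):
            """Gaussian-kernel weighted mean of y at query point x0."""
            w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
            return np.sum(w * y) / np.sum(w)

        x_grid = np.linspace(-3, 3, 7)
        print([round(nadaraya_watson(x0, x_train, y_train), 3) for x0 in x_grid])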

  1. Determination of Bosentan in Pharmaceutical Preparations by Linear Sweep, Square Wave and Differential Pulse Voltammetry Methods

    PubMed Central

    Atila, Alptug; Yilmaz, Bilal

    2015-01-01

    In this study, simple, fast and reliable cyclic voltammetry (CV), linear sweep voltammetry (LSV), square wave voltammetry (SWV) and differential pulse voltammetry (DPV) methods were developed and validated for determination of bosentan in pharmaceutical preparations. The proposed methods were based on electrochemical oxidation of bosentan at a platinum electrode in acetonitrile solution containing 0.1 M TBAClO4. A well-defined oxidation peak was observed at 1.21 V. The calibration curves were linear for bosentan over the concentration ranges of 5-40 µg/mL for LSV and 5-35 µg/mL for the SWV and DPV methods, respectively. Intra- and inter-day precision values for bosentan were less than 4.92%, and accuracy (relative error) was better than 6.29%. The mean recovery of bosentan was 100.7% for pharmaceutical preparations. No interference was found from two tablet excipients under the selected assay conditions. The methods developed in this study are accurate and precise, and can be easily applied to Tracleer and Diamond tablets as pharmaceutical preparations. PMID:25901151

  2. Determination of bosentan in pharmaceutical preparations by linear sweep, square wave and differential pulse voltammetry methods.

    PubMed

    Atila, Alptug; Yilmaz, Bilal

    2015-01-01

    In this study, simple, fast and reliable cyclic voltammetry (CV), linear sweep voltammetry (LSV), square wave voltammetry (SWV) and differential pulse voltammetry (DPV) methods were developed and validated for determination of bosentan in pharmaceutical preparations. The proposed methods were based on electrochemical oxidation of bosentan at a platinum electrode in acetonitrile solution containing 0.1 M TBAClO4. A well-defined oxidation peak was observed at 1.21 V. The calibration curves were linear over the concentration ranges of 5-40 µg/mL for LSV and 5-35 µg/mL for the SWV and DPV methods. Intra- and inter-day precision values for bosentan were less than 4.92%, and accuracy (relative error) was better than 6.29%. The mean recovery of bosentan was 100.7% for pharmaceutical preparations. No interference was found from two tablet excipients at the selected assay conditions. The methods developed in this study are accurate and precise, and can be easily applied to Tracleer and Diamond tablets as pharmaceutical preparations.

  3. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for determining acoustic source characteristics, the source strength and the source impedance, in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for its identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator to detect the accuracy of the results. It was found that for a certain load length, the load resistance at frequencies corresponding to odd multiples of the one-quarter wavelength exhibits peaks, which produce the maximum error in source impedance identification. Therefore, the load impedance in the frequency ranges around odd multiples of the one-quarter wavelength should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., the same order of magnitude), the identification error of the source impedance can be effectively reduced.
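
    The frequencies to be avoided can be computed directly: for a load of length L, the resistance peaks near odd multiples of the quarter-wavelength frequency, f_k = (2k - 1)c / (4L). A quick sketch, with the speed of sound and the load length as assumed values:

```python
# Frequencies to avoid when selecting loads: for an open-pipe load of length L,
# the load resistance peaks near odd multiples of the quarter-wavelength
# frequency f_k = (2k - 1) * c / (4 * L).  c and L are assumed values.
c = 343.0   # m/s, speed of sound at ~20 C (hot exhaust gas would be faster)
L = 0.5     # m, example load length

for k in range(1, 5):
    f = (2 * k - 1) * c / (4 * L)
    print(f"avoid the band around {f:.0f} Hz (odd quarter-wave multiple, k={k})")
```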

  4. Bayesian feature selection for high-dimensional linear regression via the Ising approximation with applications to genomics.

    PubMed

    Fisher, Charles K; Mehta, Pankaj

    2015-06-01

    Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30 000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, is freely available at http://physics.bu.edu/∼pankajm/BIACode.
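
    The mean-field step the abstract refers to can be sketched generically: each feature carries an Ising spin whose magnetization is iterated to a fixed point. In the BIA the fields and couplings are derived from feature-response and feature-feature correlations; the placeholder values below show only the fixed-point iteration itself, not the paper's exact mapping.

```python
# Generic mean-field Ising solver of the kind the BIA relies on: each feature
# carries a spin with magnetization m_i, updated as m_i = tanh(h_i + sum_j J_ij m_j).
# h and J are random placeholders here; in the BIA they come from the
# regression problem's correlations.
import numpy as np

rng = np.random.default_rng(2)
p = 30
h = rng.normal(scale=0.5, size=p)            # local fields (placeholder)
J = rng.normal(scale=0.05 / np.sqrt(p), size=(p, p))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)                     # weak, symmetric couplings

m = np.zeros(p)
for _ in range(200):                         # damped fixed-point iteration
    m_new = np.tanh(h + J @ m)
    if np.max(np.abs(m_new - m)) < 1e-10:
        break
    m = 0.5 * m + 0.5 * m_new

posterior = (1 + m) / 2                      # spin up <-> feature selected
print("top-ranked features:", np.argsort(posterior)[::-1][:5])
```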

  5. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 (grain yield) and 0.34 (plant height) to 0.63 (flowering time). Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best-performing statistical method for grain yield, where no large-effect QTL were detected by GWAS, while for flowering time, where a single very large-effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline.

  6. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines

    PubMed Central

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-01-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 (grain yield) and 0.34 (plant height) to 0.63 (flowering time). Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best-performing statistical method for grain yield, where no large-effect QTL were detected by GWAS, while for flowering time, where a single very large-effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273

  7. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    PubMed

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously to select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. From a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method under three strategies: 1) performance in each environment (with interaction effect); 2) performance in the mean environment (without interaction effect); and 3) simultaneous selection for grain yield, stability and adaptability. The accuracy obtained (91%) reveals excellent experimental quality and, consequently, reliability in the selection of superior progenies for grain yield. The gain with the selection of the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, progenies 4, 11, and 3 (selected in all environments and in the mean environment, and by the adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  8. Accuracy and reliability of stitched cone-beam computed tomography images

    PubMed Central

    Egbert, Nicholas; Cagna, David R.; Wicks, Russell A.

    2015-01-01

    Purpose: This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Materials and Methods: Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. Results: The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. Conclusion: The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets. PMID:25793182
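
    A minimal sketch of the summary statistics reported above (mean difference, standard deviation, and 95% confidence interval of paired differences); the ten differences are invented for illustration.

```python
# Sketch of the reported summary statistics: mean difference between CBCT
# and caliper measurements with a t-based 95% confidence interval.  The
# paired differences below are invented for illustration.
import numpy as np
from scipy import stats

diff = np.array([0.21, 0.45, 0.30, 0.12, 0.52, 0.38, 0.27, 0.41, 0.33, 0.44])  # mm

mean, sd = diff.mean(), diff.std(ddof=1)
se = sd / np.sqrt(len(diff))
t = stats.t.ppf(0.975, df=len(diff) - 1)
print(f"mean difference {mean:.2f} mm, SD {sd:.2f} mm, "
      f"95% CI [{mean - t * se:.2f}, {mean + t * se:.2f}] mm")
```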

  9. 40 CFR 63.8 - Monitoring requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...) Data recording, calculations, and reporting; (v) Accuracy audit procedures, including sampling and...

  10. A Technique of Treating Negative Weights in WENO Schemes

    NASA Technical Reports Server (NTRS)

    Shi, Jing; Hu, Changqing; Shu, Chi-Wang

    2000-01-01

    High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without a need to get rid of them.
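
    The splitting step at the heart of this technique, as described by the authors, decomposes each linear weight into positive and negative parts that are renormalized separately, so that two ordinary positive-weight WENO combinations can be formed and recombined. A sketch of the algebra with placeholder stencil values; in a real scheme the nonlinear weights within each group would come from smoothness indicators.

```python
# Sketch of the weight-splitting step for handling negative linear weights:
# each weight is split as gamma = gamma_plus - gamma_minus with both parts
# positive, the two groups are renormalized separately, and the final value
# is sigma_plus * R_plus - sigma_minus * R_minus.  Stencil reconstructions
# r_i are placeholders.
import numpy as np

gamma = np.array([0.7, -0.3, 0.6])        # linear weights, one negative
r = np.array([1.02, 0.95, 1.01])          # candidate stencil reconstructions

gamma_plus = 0.5 * (gamma + 3.0 * np.abs(gamma))   # positive part of the split
gamma_minus = gamma_plus - gamma                    # positive by construction

sigma_plus, sigma_minus = gamma_plus.sum(), gamma_minus.sum()
w_plus = gamma_plus / sigma_plus                    # renormalized group weights
w_minus = gamma_minus / sigma_minus

combined = sigma_plus * (w_plus @ r) - sigma_minus * (w_minus @ r)
print(np.isclose(combined, gamma @ r))              # identical to the direct sum
```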

  11. Application of tripolar concentric electrodes and prefeature selection algorithm for brain-computer interface.

    PubMed

    Besio, Walter G; Cao, Hongbao; Zhou, Peng

    2008-04-01

    For persons with severe disabilities, a brain-computer interface (BCI) may be a viable means of communication. The Laplacian electroencephalogram (EEG) has been shown to improve classification in EEG recognition. In this work, the effectiveness of signals from tripolar concentric electrodes and disc electrodes was compared for use as a BCI. Two sets of left/right hand motor imagery EEG signals were acquired. An autoregressive (AR) model was developed for feature extraction, with a Mahalanobis distance based linear classifier for classification. An exhaustive selection algorithm was employed to analyze three factors before feature extraction: 1) the length of data used in each trial, 2) the start position of the data, and 3) the order of the AR model. The results showed that tripolar concentric electrodes generated significantly higher classification accuracy than disc electrodes.
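
    A minimal sketch of the described pipeline: least-squares AR coefficients as trial features, classified by Mahalanobis distance to class means under a pooled within-class covariance. The synthetic trials and AR order are assumptions for illustration.

```python
# Sketch of the pipeline: autoregressive (AR) coefficients as features,
# classified by Mahalanobis distance to each class mean (a linear classifier).
# EEG trials here are synthetic AR(2) processes with class-dependent dynamics.
import numpy as np

rng = np.random.default_rng(3)

def ar_features(x, order=6):
    """Least-squares AR(order) coefficients of a 1-D signal."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coef

def make_trial(a1):
    x = np.zeros(512)
    for t in range(2, 512):
        x[t] = a1 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()
    return x

trials = [make_trial(0.6) for _ in range(40)] + [make_trial(0.8) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)
F = np.array([ar_features(x) for x in trials])

means = [F[labels == c].mean(axis=0) for c in (0, 1)]
cov = sum(np.cov(F[labels == c].T) for c in (0, 1)) / 2   # pooled covariance
cov_inv = np.linalg.inv(cov)

def classify(f):
    d = [(f - m) @ cov_inv @ (f - m) for m in means]      # squared Mahalanobis
    return int(np.argmin(d))

acc = np.mean([classify(f) == c for f, c in zip(F, labels)])
print(f"training accuracy {acc:.2f}")
```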

  12. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

    The conventional method of imposing time dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.

  13. Sonography of the chest using linear-array versus sector transducers: Correlation with auscultation, chest radiography, and computed tomography.

    PubMed

    Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli

    2016-07-08

    The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe, in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR) using thoracic CT as a gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest.

  14. Determination of tigecycline in turkey plasma by LC-MS/MS: validation and application in a pharmacokinetic study.

    PubMed

    Jasiecka-Mikołajczyk, A; Jaroszewski, J J

    2017-03-01

    Tigecycline (TIG), a novel glycylcycline antibiotic, plays an important role in the management of complicated skin and intra-abdominal infections. The available data lack any description of a method for determination of TIG in avian plasma. In our study, a selective, accurate, reversed-phase high performance liquid chromatography-tandem mass spectrometry method was developed for the determination of TIG in turkey plasma. Sample preparation was based on protein precipitation and liquid-liquid extraction using 1,2-dichloroethane. Chromatographic separation of TIG and minocycline (internal standard, IS) was achieved on an Atlantis T3 column (150 mm × 3.0 mm, 3.0 μm) using gradient elution. The selected reaction monitoring transitions were performed at 293.60 m/z → 257.10 m/z for TIG and 458.00 m/z → 441.20 m/z for IS. The developed method was validated in terms of specificity, selectivity, linearity, lower limit of quantification, limit of detection, precision, accuracy, matrix effect, carry-over effect, extraction recovery and stability. All parameters of the method submitted to validation met the acceptance criteria. The assay was linear over the concentration range of 0.01-100 μg/ml. This validated method was successfully applied to a TIG pharmacokinetic study in turkeys after intravenous and oral administration at a dose of 10 mg/kg at various time-points.

  15. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768

  16. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    PubMed

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  17. Linear estimation of coherent structures in wall-bounded turbulence at Re_τ = 2000

    NASA Astrophysics Data System (ADS)

    Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.

    2018-04-01

    The estimation problem for a fully-developed turbulent channel flow at Re_τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as the measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
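
    The estimator described here is, at its core, a Kalman filter built on a linear state-space model. The following sketch runs a generic two-state filter that estimates an unmeasured state from a single noisy measurement; the matrices are a stable placeholder system, not the Navier-Stokes-based model of the paper.

```python
# Minimal linear Kalman filter: state-space model x_{k+1} = A x_k + w,
# y_k = C x_k + v, with the filter estimating the unmeasured second state
# from a single noisy measurement of the first.
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.95, 0.1], [0.0, 0.9]])    # state transition (placeholder)
C = np.array([[1.0, 0.0]])                  # measure only the first state
Q = 0.01 * np.eye(2)                        # process noise covariance
R = np.array([[0.1]])                       # measurement noise covariance

x_true, x_hat, P = np.zeros(2), np.zeros(2), np.eye(2)

for k in range(200):
    # simulate the plant
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # update
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(2) - K @ C) @ P

print(f"estimated unmeasured state {x_hat[1]:.3f}, true {x_true[1]:.3f}")
```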

  18. Adaptive Modeling Procedure Selection by Data Perturbation.

    PubMed

    Zhang, Yongli; Shen, Xiaotong

    2015-10-01

    Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherited in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.

  19. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression.

    PubMed

    Lemieux, Sébastien

    2006-08-25

    The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes with fewer than 10% false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
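
    A sketch of the PL-LM idea under simplifying assumptions: estimate a per-gene treatment effect from probe-level data with a linear model (here, with paired probes, the OLS estimate reduces to the mean probe-wise difference), then fit a two-component Gaussian mixture to the estimated effects to flag DEGs. Data are synthetic and the paper's exact parameterization is not reproduced.

```python
# Sketch of the PL-LM idea: a per-gene treatment effect is estimated from
# probe-level intensities, then a Gaussian mixture on the estimated effects
# separates differentially expressed genes from the bulk.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n_genes, n_probes = 500, 11
true_effect = np.where(rng.random(n_genes) < 0.1,
                       rng.normal(2.0, 0.3, n_genes), 0.0)

effects = np.empty(n_genes)
for g in range(n_genes):
    probe_affinity = rng.normal(size=n_probes)       # probe-specific baseline
    control = probe_affinity + rng.normal(0, 0.3, n_probes)
    treated = probe_affinity + true_effect[g] + rng.normal(0, 0.3, n_probes)
    # linear model y = probe effect + condition effect; with paired probes the
    # OLS estimate of the condition effect is the mean probe-wise difference
    effects[g] = (treated - control).mean()

gm = GaussianMixture(n_components=2, random_state=0).fit(effects.reshape(-1, 1))
de_component = np.argmax(gm.means_.ravel())          # component with larger mean
is_de = gm.predict(effects.reshape(-1, 1)) == de_component
print(f"flagged {is_de.sum()} genes as differentially expressed")
```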

  20. Linear signal noise summer accurately determines and controls S/N ratio

    NASA Technical Reports Server (NTRS)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.

  1. Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking

    ERIC Educational Resources Information Center

    Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas

    2012-01-01

    This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…

  2. A comparative study between three stability indicating spectrophotometric methods for the determination of diatrizoate sodium in presence of its cytotoxic degradation product based on two-wavelength selection

    NASA Astrophysics Data System (ADS)

    Riad, Safaa M.; El-Rahman, Mohamed K. Abd; Fawaz, Esraa M.; Shehata, Mostafa A.

    2015-06-01

    Three sensitive, selective, and precise stability indicating spectrophotometric methods for the determination of the X-ray contrast agent, diatrizoate sodium (DTA) in the presence of its acidic degradation product (highly cytotoxic 3,5-diamino metabolite) and in pharmaceutical formulation, were developed and validated. The first method is ratio difference, the second one is the bivariate method, and the third one is the dual wavelength method. The calibration curves for the three proposed methods are linear over a concentration range of 2-24 μg/mL. The selectivity of the proposed methods was tested using laboratory prepared mixtures. The proposed methods have been successfully applied to the analysis of DTA in pharmaceutical dosage forms without interference from other dosage form additives. The results were statistically compared with the official US pharmacopeial method. No significant difference for either accuracy or precision was observed.

  3. Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction

    PubMed Central

    Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.

    2010-01-01

    We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer’s disease (AD) classification to make accurate predictions and simultaneously identify critical imaging markers relevant to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models which are easy to interpret. PARD selects the model with the best estimate of the predictive performance instead of choosing the one with the largest marginal model likelihood. A comparative study with support vector machines (SVM) shows that ARD/PARD in general outperform SVM in terms of prediction accuracy. An additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures. PMID:20879451
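
    ARD itself is available in standard libraries; the sketch below uses scikit-learn's ARDRegression (a generic ARD implementation, not the authors' PARD code) on synthetic data where only three of forty features are informative.

```python
# Sketch of ARD-style Bayesian feature selection: irrelevant features receive
# large precisions and their coefficients are effectively driven to zero.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 40))
w = np.zeros(40)
w[:3] = [2.0, -1.5, 1.0]                  # only three informative features
y = X @ w + rng.normal(scale=0.5, size=150)

ard = ARDRegression().fit(X, y)
kept = np.flatnonzero(np.abs(ard.coef_) > 0.1)
print("features retained by ARD:", kept)  # expect approximately [0, 1, 2]
```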

  4. Variable Selection for Support Vector Machines in Moderately High Dimensions

    PubMed Central

    Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze

    2015-01-01

    The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer to the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on the initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916

  5. Sensitive and selective liquid chromatography-tandem mass spectrometry method for the quantification of aniracetam in human plasma.

    PubMed

    Zhang, Jingjing; Liang, Jiabi; Tian, Yuan; Zhang, Zunjian; Chen, Yun

    2007-10-15

    A rapid, sensitive and selective LC-MS/MS method was developed and validated for the quantification of aniracetam in human plasma using estazolam as internal standard (IS). Following liquid-liquid extraction, the analytes were separated using a mobile phase of methanol-water (60:40, v/v) on a reverse phase C18 column and analyzed by a triple-quadrupole mass spectrometer in the selected reaction monitoring (SRM) mode using the respective [M+H]+ ions, m/z 220-->135 for aniracetam and m/z 295-->205 for the IS. The assay exhibited a linear dynamic range of 0.2-100 ng/mL for aniracetam in human plasma. The lower limit of quantification (LLOQ) was 0.2 ng/mL with a relative standard deviation of less than 15%. Acceptable precision and accuracy were obtained for concentrations over the standard curve range. The validated LC-MS/MS method has been successfully applied to study the pharmacokinetics of aniracetam in healthy male Chinese volunteers.

  6. A comparative study between three stability indicating spectrophotometric methods for the determination of diatrizoate sodium in presence of its cytotoxic degradation product based on two-wavelength selection.

    PubMed

    Riad, Safaa M; El-Rahman, Mohamed K Abd; Fawaz, Esraa M; Shehata, Mostafa A

    2015-06-15

    Three sensitive, selective, and precise stability indicating spectrophotometric methods for the determination of the X-ray contrast agent, diatrizoate sodium (DTA) in the presence of its acidic degradation product (highly cytotoxic 3,5-diamino metabolite) and in pharmaceutical formulation, were developed and validated. The first method is ratio difference, the second one is the bivariate method, and the third one is the dual wavelength method. The calibration curves for the three proposed methods are linear over a concentration range of 2-24 μg/mL. The selectivity of the proposed methods was tested using laboratory prepared mixtures. The proposed methods have been successfully applied to the analysis of DTA in pharmaceutical dosage forms without interference from other dosage form additives. The results were statistically compared with the official US pharmacopeial method. No significant difference for either accuracy or precision was observed.

  7. Electrocardiogram classification using delay differential equations

    NASA Astrophysics Data System (ADS)

    Lainscsek, Claudia; Sejnowski, Terrence J.

    2013-06-01

    Time series analysis with nonlinear delay differential equations (DDEs) reveals nonlinear as well as spectral properties of the underlying dynamical system. Here, global DDE models were used to analyze 5 min data segments of electrocardiographic (ECG) recordings in order to capture distinguishing features for different heart conditions such as normal heart beat, congestive heart failure, and atrial fibrillation. The number of terms, the delays, and the order of nonlinearity of the model have to be selected so as to be maximally discriminative. The DDE model form that best separated the three classes of data was chosen by exhaustive search up to third order polynomials. Such an approach can provide deep insight into the nature of the data, since the linear terms of a DDE correspond to the main time-scales in the signal and the nonlinear terms in the DDE are related to nonlinear couplings between the harmonic signal parts. The DDEs were able to detect atrial fibrillation with an accuracy of 72%, congestive heart failure with an accuracy of 88%, and normal heart beat with an accuracy of 97% from 5 min of ECG, a much shorter time interval than required to achieve comparable performance with other methods.
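
    A sketch of fitting such a DDE model by least squares: delayed copies of the signal (and one nonlinear product term) form the design matrix, the numerical derivative is the target, and the fitted coefficients would serve as classification features. The signal, delays, and model form are assumptions for illustration.

```python
# Sketch of fitting a delay differential equation model of the form
#   du/dt = a1*u(t - tau1) + a2*u(t - tau2) + a3*u(t - tau1)*u(t - tau2)
# by least squares, with du/dt estimated by finite differences.
import numpy as np

rng = np.random.default_rng(7)
dt = 0.01
t = np.arange(0, 60, dt)
u = np.sin(t) + 0.3 * np.sin(2.1 * t) + 0.02 * rng.normal(size=t.size)

tau1, tau2 = 30, 55                        # delays in samples (assumed)
du = np.gradient(u, dt)                    # numerical derivative

lag = max(tau1, tau2)
U1 = u[lag - tau1 : -tau1]                 # u(t - tau1)
U2 = u[lag - tau2 : -tau2]                 # u(t - tau2)
design = np.column_stack([U1, U2, U1 * U2])   # two linear + one nonlinear term

coef, *_ = np.linalg.lstsq(design, du[lag:], rcond=None)
print("DDE features (a1, a2, a3):", np.round(coef, 3))
```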

  8. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    PubMed Central

    2017-01-01

    Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared across ten state-of-the-art classifiers. Results indicate that the optimal performance of a single channel is achieved using a combination of channel CP4, feature FE, and classifier Random Forest (RF). The highest accuracy can be up to 96.6%, which is able to meet the needs of real applications. The best combination of channel + feature + classifier is subject-specific. In this work, the accuracy (Acc) obtained with FE as the feature is far greater than that of the other features. The accuracy using classifier RF is the best, while that of classifier SVM with linear kernel is the worst. Channel selection also has a large impact on the Acc, and the performance of the various channels differs considerably. PMID:28255330

  9. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel.

    PubMed

    Hu, Jianfeng

    2017-01-01

    Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared across ten state-of-the-art classifiers. Results indicate that the optimal performance of a single channel is achieved using a combination of channel CP4, feature FE, and classifier Random Forest (RF). The highest accuracy can be up to 96.6%, which is able to meet the needs of real applications. The best combination of channel + feature + classifier is subject-specific. In this work, the accuracy (Acc) obtained with FE as the feature is far greater than that of the other features. The accuracy using classifier RF is the best, while that of classifier SVM with linear kernel is the worst. Channel selection also has a large impact on the Acc, and the performance of the various channels differs considerably.
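
    Of the four entropy features compared above, sample entropy is the easiest to state compactly. Below is a minimal O(N^2) implementation in the standard Richman-Moorman form, run on a synthetic segment; a production pipeline would use an optimized version.

```python
# Minimal sample entropy (SE): the negative log ratio of (m+1)-length to
# m-length template matches within tolerance r, excluding self-matches.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                       # tolerance scaled to the signal

    def count_matches(m):
        templates = np.array([x[i : i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1 :] - templates[i]), axis=1)
            count += np.sum(d <= r)              # Chebyshev-distance matches
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(8)
eeg = np.cumsum(rng.normal(size=1000)) * 0.1     # synthetic 1/f-like segment
print(f"sample entropy: {sample_entropy(eeg):.3f}")
```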

  10. Can the prevalence of high blood drug concentrations in a population be estimated by analysing oral fluid? A study of tetrahydrocannabinol and amphetamine.

    PubMed

    Gjerde, Hallvard; Verstraete, Alain

    2010-02-25

    To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracies than simple calculations if good data on OF/B ratios are available. If using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimation of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy.
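
    The Monte Carlo approach can be sketched as follows: each subject's oral-fluid concentration is divided by OF/B ratios drawn from an assumed ratio distribution, and the prevalence of blood concentrations above a threshold is averaged over simulations. The concentrations, the log-normal ratio model, and the threshold are all illustrative assumptions.

```python
# Sketch of the Monte Carlo prevalence estimate: divide oral-fluid (OF)
# concentrations by sampled OF/B ratios and average the fraction of simulated
# blood concentrations above a "high" threshold.
import numpy as np

rng = np.random.default_rng(9)
of_conc = rng.lognormal(mean=1.5, sigma=1.0, size=300)   # ng/mL, synthetic
threshold = 3.0                                          # "high" blood conc.

# Assumed OF/B ratio distribution (e.g. fitted to the observed ratios).
ratio_mu, ratio_sigma = np.log(8.0), 0.8

n_sim, exceed = 10_000, 0.0
for _ in range(n_sim):
    ratios = rng.lognormal(ratio_mu, ratio_sigma, size=of_conc.size)
    blood = of_conc / ratios
    exceed += np.mean(blood > threshold)

print(f"estimated prevalence of high blood concentrations: {exceed / n_sim:.1%}")
```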

  11. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  12. The novel application of artificial neural network on bioelectrical impedance analysis to assess the body composition in elderly

    PubMed Central

    2013-01-01

    Background: This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat free mass (FFM) of the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model, and to compare the predictive accuracy with that of the linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods: A total of 88 Taiwanese elderly adults were recruited as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedance and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM values estimated by the BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to FFMDXA. Measurements from an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results: The significant predictors in the developed linear regression model for predicting FFM were impedance, gender, age, height and weight (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The same predictors were set as the variables of the input layer, using five neurons, in the BP-ANN model (r2 = 0.987 with SD = 1.192 kg and a lower RMSE = 1.183 kg), which showed greater accuracy for estimating FFM when compared with the linear model. A better agreement existed between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion: When the performance of the developed prediction equations for estimating the reference FFMDXA was compared, the linear model had a lower r2 with a larger SD in predictive results than the BP-ANN model, indicating that the ANN model is more suitable for estimating FFM. PMID:23388042
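
    The comparison reported here, a linear regression versus a small back-propagation network for FFM prediction, can be sketched with scikit-learn. The synthetic data below embed a mildly nonlinear ground truth built around the classic height^2/impedance index, so the network is expected to fit somewhat better; none of the numbers correspond to the study's cohort.

```python
# Sketch of the comparison: linear regression vs. a small MLP (BP-ANN stand-in)
# for predicting fat-free mass from impedance and anthropometrics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n = 300
impedance = rng.normal(500.0, 60.0, n)   # ohms, synthetic
age = rng.uniform(65.0, 90.0, n)
height = rng.normal(160.0, 8.0, n)       # cm
weight = rng.normal(62.0, 9.0, n)        # kg
X = np.column_stack([impedance, age, height, weight])
# mildly nonlinear ground truth built around the height^2/impedance index
ffm = 0.5 * height**2 / impedance + 0.2 * weight - 0.05 * age \
      + rng.normal(0.0, 1.5, n)

Xtr, Xte, ytr, yte = train_test_split(X, ffm, random_state=0)
models = {
    "linear": LinearRegression(),
    "BP-ANN": make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(5,),
                                         max_iter=5000, random_state=0)),
}
for name, model in models.items():
    pred = model.fit(Xtr, ytr).predict(Xte)
    rmse = mean_squared_error(yte, pred) ** 0.5
    print(f"{name}: r2 = {r2_score(yte, pred):.3f}, RMSE = {rmse:.2f} kg")
```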

  13. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.

  14. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  15. Verification of spectrophotometric method for nitrate analysis in water samples

    NASA Astrophysics Data System (ADS)

    Kurniawati, Puji; Gusrianti, Reny; Dwisiwi, Bledug Bernanti; Purbaningtias, Tri Esti; Wiyantoko, Bayu

    2017-12-01

    The aim of this research was to verify the spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were linearity, method detection limit, level of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression equation was 0.9981. The method detection limit (MDL) was determined to be 0.1294 mg/L and the limit of quantitation (LOQ) 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations from 10 to 50 mg/L were linear at a confidence level of 99%. The accuracy, determined through the recovery value, was 109.1907%. The precision was assessed using the percent relative standard deviation (%RSD) from repeatability, and its result was 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
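
    The linearity and detection-limit arithmetic can be sketched as follows. Note the MDL/LOQ formulas below are the common ICH-style ones (3.3 and 10 times the residual standard error over the slope); the APHA procedure may define them differently, and the absorbance values are invented.

```python
# Sketch of linearity and detection-limit estimation for a calibration curve.
# Absorbance values are invented for illustration.
import numpy as np

conc = np.array([0, 10, 20, 30, 40, 50], dtype=float)       # mg/L nitrate
absorbance = np.array([0.002, 0.101, 0.205, 0.298, 0.404, 0.499])  # synthetic

slope, intercept = np.polyfit(conc, absorbance, 1)
fit = slope * conc + intercept
r = np.corrcoef(conc, absorbance)[0, 1]                      # linearity check
s_res = np.sqrt(np.sum((absorbance - fit) ** 2) / (len(conc) - 2))

print(f"r = {r:.4f}")
print(f"MDL ~ {3.3 * s_res / slope:.4f} mg/L, "
      f"LOQ ~ {10 * s_res / slope:.4f} mg/L")                # ICH-style formulas
```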

  16. Vertical bone measurements from cone beam computed tomography images using different software packages.

    PubMed

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz

    2015-01-01

    This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.

  17. Unsupervised Wishart Classification of Wetlands in Newfoundland, Canada Using PolSAR Data Based on Fisher Linear Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Homayouni, S.

    2016-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a complex multi-dimensional dataset and an important source of information for various natural resource and environmental classification and monitoring applications. PolSAR imagery produces valuable information by observing scattering mechanisms from different natural and man-made objects. Land cover mapping using PolSAR data classification is one of the most important applications of SAR remote sensing earth observation, and it has gained increasing attention in recent years. However, one of the most challenging aspects of classification is selecting features with maximum discrimination capability. To address this challenge, a statistical approach based on Fisher Linear Discriminant Analysis (FLDA) and the incorporation of physical interpretation of PolSAR data into classification is proposed in this paper. After pre-processing of the PolSAR data, including speckle reduction, the H/α classification is used to classify the basic scattering mechanisms. Then, a new method for feature weighting, based on the fusion of FLDA and physical interpretation, is implemented. This method increases the classification accuracy as well as the between-class discrimination in the final Wishart classification. The proposed method was applied to a full polarimetric C-band RADARSAT-2 data set from the Avalon area, Newfoundland and Labrador, Canada. This imagery was acquired in June 2015 and covers various types of wetlands, including bogs, fens, marshes and shallow water. The results were compared with the standard Wishart classification, and an improvement of about 20% was achieved in the overall accuracy. This method provides an opportunity for operational wetland classification in northern latitudes with high accuracy using only polarimetric SAR data.

  18. Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions.

    PubMed

    Zhang, Jingji; Pavlov, Michael Y; Ehrenberg, Måns

    2018-02-16

    We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of 'monitoring bases' A1492, A1493 and G530 in 16S rRNA in favor of their 'activated' state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the 'intrinsic selectivity' of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome.

  19. Simultaneous determination of atenolol, chlorthalidone and amiloride in pharmaceutical preparations by capillary zone electrophoresis with ultraviolet detection.

    PubMed

    Al Azzam, Khaldun M; Saad, Bahruddin; Aboul-Enein, Hassan Y

    2010-09-01

    Capillary zone electrophoresis methods for the simultaneous determination of the beta-blocker atenolol and the diuretics chlorthalidone and amiloride in pharmaceutical formulations have been developed. The influences of several factors (buffer pH, concentration, applied voltage, capillary temperature and injection time) were studied. Using phenobarbital as internal standard, the analytes were all separated in less than 4 min. The separation was carried out in normal polarity mode at 25 degrees C, 25 kV and using hydrodynamic injection (10 s). The separation was effected in an uncoated fused-silica capillary (75 µm i.d. × 52 cm) with a background electrolyte of 25 mM H3PO4 adjusted with 1 M NaOH solution (pH 9.0) and detection at 198 nm. The method was validated with respect to linearity, limit of detection and quantification, accuracy, precision and selectivity. Calibration curves were linear over the range 1-250 µg/mL for atenolol and chlorthalidone and from 2.5-250 µg/mL for amiloride. The relative standard deviations of intra- and inter-day migration times and corrected peak areas were less than 6.0%. The method showed good precision and accuracy and was successfully applied to the simultaneous determination of atenolol, chlorthalidone and amiloride in various pharmaceutical tablet formulations.

  20. Development and validation of carbofuran and 3-hydroxycarbofuran analysis by high-pressure liquid chromatography with diode array detector (HPLC-DAD) for forensic Veterinary Medicine.

    PubMed

    Gonçalves, Vagner; Hazarbassanov, Nicolle Queiroz; de Siqueira, Adriana; Florio, Jorge Camilo; Ciscato, Claudia Helena Pastor; Maiorka, Paulo Cesar; Fukushima, André Rinaldi; de Souza Spinosa, Helenice

    2017-10-15

    Agricultural pesticides used with the criminal intent to intoxicate domestic and wild animals are a serious concern in Veterinary Medicine. In order to identify the pesticide carbofuran and its metabolite 3-hydroxycarbofuran in animals suspected of exogenous intoxication, a high-pressure liquid chromatography with diode array detector (HPLC-DAD) method was developed and validated in stomach contents, liver, vitreous humor and blood. The method was evaluated using biological samples from seven different animal species. The following analytical validation parameters were evaluated: linearity, precision, accuracy, selectivity, recovery and matrix effect. The method was linear over the range of 6.25-100 μg/mL, and the correlation coefficient (r2) values were >0.9811 for all matrices. The precision and accuracy of the method were determined by the coefficient of variation (CV) and the relative standard deviation error (RSE), and both were less than 15%. Recovery ranged from 74.29 to 100.1% for carbofuran and from 64.72 to 100.61% for 3-hydroxycarbofuran. There were no significant interfering peaks or matrix effects. This method was suitable for detecting 25 positive cases for carbofuran amongst a total of 64 animal samples suspected of poisoning brought to the Toxicology Diagnostic Laboratory, School of Veterinary Medicine and Animal Sciences, University of Sao Paulo.

  1. Compensatory selection for roads over natural linear features by wolves in northern Ontario: Implications for caribou conservation

    PubMed Central

    Newton, Erica J.; Patterson, Brent R.; Anderson, Morgan L.; Rodgers, Arthur R.; Vander Vennen, Lucas M.; Fryxell, John M.

    2017-01-01

    Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. This behavioral mechanism–a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors–has negative consequences for the viability of woodland caribou. PMID:29117234

  2. Compensatory selection for roads over natural linear features by wolves in northern Ontario: Implications for caribou conservation.

    PubMed

    Newton, Erica J; Patterson, Brent R; Anderson, Morgan L; Rodgers, Arthur R; Vander Vennen, Lucas M; Fryxell, John M

    2017-01-01

    Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. This behavioral mechanism-a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors-has negative consequences for the viability of woodland caribou.

  3. A universal deep learning approach for modeling the flow of patients under different severities.

    PubMed

    Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L

    2018-02-01

    The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding. Knowing the fluctuation of patient arrival volume in advance is an important prerequisite for relieving this pressure. Motivated by this, the objective of this study is to develop an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data were collected from an actual A&ED and categorized into five groups based on triage level. A genetic algorithm (GA)-based feature selection algorithm was improved and implemented as a pre-processing step for this time-series prediction problem, in order to identify the key features affecting patient flow. In the improved GA, a fitness-based crossover is proposed in place of traditional point-based crossover to maintain the joint information of multiple features during the iterative process. Deep neural networks (DNNs) are employed as the prediction model for their universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting, and all introduced hyper-parameters are optimized efficiently by grid search in one pass. As for feature selection, the improved GA-based feature selection algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for prediction accuracy, compared with frequently used statistical models (GLM, seasonal ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieved higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of this study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, the joint information of multiple features is maintained by the fitness-based crossover operator, and the universal property of DNNs is further enhanced by merging different regularization strategies. Practically, features selected by the improved GA can be used to uncover the underlying relationship between patient flows and input features, and the predicted values are significant indicators of patient demand that A&ED managers can use for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
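
    The fitness-based crossover is described only at a high level, so the sketch below is one plausible reading, not the authors' operator: each gene of the offspring is drawn from a parent with probability proportional to that parent's fitness, which tends to keep co-selected feature subsets of the fitter parent together.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness_based_crossover(parent_a, parent_b, fit_a, fit_b):
        """One plausible fitness-based crossover for binary feature masks.

        Instead of cutting at a single point (which splits co-selected
        features apart), every gene is drawn from parent A with probability
        proportional to A's fitness, so feature subsets carried by the
        fitter parent tend to survive together.
        """
        p_a = fit_a / (fit_a + fit_b)
        take_a = rng.random(parent_a.shape) < p_a
        return np.where(take_a, parent_a, parent_b)

    # Hypothetical 10-feature selection masks and fitness values
    a = rng.integers(0, 2, 10)
    b = rng.integers(0, 2, 10)
    print(fitness_based_crossover(a, b, fit_a=0.82, fit_b=0.61))
    ```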

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xin, E-mail: xinshih86029@gmail.com; Zhao, Xiangmo; Hui, Fei

    Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly through sufficient packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of the sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
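
    The linear weighted fusion step amounts to a variance-weighted average of the clock-deviation estimates. Below is a minimal sketch, assuming independent sync errors so that only the diagonal of the error covariance matters; the function and variable names are ours, not the paper's.

    ```python
    import numpy as np

    def fuse_clock_offsets(estimates, error_cov):
        """Linear weighted fusion of clock-offset estimates.

        Each estimate comes from one pairwise-broadcast observation; the
        weights are inversely proportional to the variance of its sync
        error, so noisier observations contribute less.
        """
        var = np.diag(error_cov)
        w = 1.0 / var
        w /= w.sum()
        return float(np.dot(w, estimates)), w

    # Hypothetical offset estimates (microseconds) and their error covariance
    est = np.array([12.4, 11.9, 12.8])
    cov = np.diag([0.20, 0.05, 0.45])
    fused, weights = fuse_clock_offsets(est, cov)
    print(fused, weights)
    ```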

  5. Influence of the starting temperature of calorimetric measurements on the accuracy of determined magnetocaloric effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreno-Ramirez, L. M.; Franco, V.; Conde, A.

    Availability of a restricted heat capacity data range has a clear influence on the accuracy of the calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data, starting from a selected temperature point down to zero kelvin, deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and the Gd5Si2Ge2 compound measured from near zero temperatures.
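
    For reference, the quantities whose truncation is at issue follow from standard thermodynamics (notation ours, not reproduced from the paper):

    \[
    S(T,H)=\int_0^{T}\frac{C(T',H)}{T'}\,\mathrm{d}T', \qquad
    \Delta S_M(T,H)=S(T,H)-S(T,0),
    \]

    with the adiabatic temperature change read off at constant entropy, \(\Delta T_{\mathrm{ad}}(S,H)=T(S,H)-T(S,0)\). Truncating the measurement at a starting temperature \(T_0\) and extrapolating linearly replaces \(C(T')\) with \(C(T_0)\,T'/T_0\) on \([0,T_0]\); this is the approximation whose non-monotonic error the study quantifies.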

  6. Influence of the starting temperature of calorimetric measurements on the accuracy of determined magnetocaloric effect

    DOE PAGES

    Moreno-Ramirez, L. M.; Franco, V.; Conde, A.; ...

    2018-02-27

    Availability of a restricted heat capacity data range has a clear influence on the accuracy of the calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data, starting from a selected temperature point down to zero kelvin, deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and the Gd5Si2Ge2 compound measured from near zero temperatures.

  7. Application of texture analysis method for mammogram density classification

    NASA Astrophysics Data System (ADS)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classifying breast tissue types in digital mammograms. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy of mammogram density classification. Texture analysis methods are used to extract features from the mammogram: texture features are extracted using the histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. The extracted features are selected using Analysis of Variance (ANOVA) and fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classifications. This work was carried out using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed, namely Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than the LDA, NB, KNN and SVM classifiers. The proposed methodology achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
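
    The ANOVA selection step followed by classification can be expressed compactly with scikit-learn. A minimal sketch on synthetic stand-in data follows (the real pipeline would use the MIAS texture features and compare all five classifiers):

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the texture features: 200 mammograms x
    # 60 features, with three density classes (0 = fatty, 1 = glandular,
    # 2 = dense). Random data, so scores here are only illustrative.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 60))
    y = rng.integers(0, 3, 200)

    # ANOVA F-test keeps the k most class-discriminative features,
    # mirroring the paper's ANOVA step before classification.
    model = make_pipeline(SelectKBest(f_classif, k=15), SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=5)
    print(scores.mean())
    ```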

  8. Lipidomics by ultrahigh performance liquid chromatography-high resolution mass spectrometry and its application to complex biological samples

    PubMed Central

    Triebl, Alexander; Trötzmüller, Martin; Hartler, Jürgen; Stojakovic, Tatjana; Köfeler, Harald C

    2018-01-01

    An improved approach for selective and sensitive identification and quantitation of lipid molecular species using reversed phase chromatography coupled to high resolution mass spectrometry was developed. The method is applicable to a wide variety of biological matrices using a simple liquid-liquid extraction procedure. Together, this approach combines three selectivity criteria: Reversed phase chromatography separates lipids according to their acyl chain length and degree of unsaturation and is capable of resolving positional isomers of lysophospholipids, as well as structural isomers of diacyl phospholipids and glycerolipids. Orbitrap mass spectrometry delivers the elemental composition of both positive and negative ions with high mass accuracy. Finally, automatically generated tandem mass spectra provide structural insight into numerous glycerolipids, phospholipids, and sphingolipids within a single run. Method validation resulted in a linearity range of more than four orders of magnitude, good values for accuracy and precision at biologically relevant concentration levels, and limits of quantitation of a few femtomoles on column. Hundreds of lipid molecular species were detected and quantified in three different biological matrices, which cover well the wide variety and complexity of various model organisms in lipidomic research. Together with a reliable software package, this method is a prime choice for global lipidomic analysis of even the most complex biological samples. PMID:28415015

  9. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal ones, with dimension reduction and feature selection performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713

  10. Lipidomics by ultrahigh performance liquid chromatography-high resolution mass spectrometry and its application to complex biological samples.

    PubMed

    Triebl, Alexander; Trötzmüller, Martin; Hartler, Jürgen; Stojakovic, Tatjana; Köfeler, Harald C

    2017-05-15

    An improved approach for selective and sensitive identification and quantitation of lipid molecular species using reversed phase chromatography coupled to high resolution mass spectrometry was developed. The method is applicable to a wide variety of biological matrices using a simple liquid-liquid extraction procedure. Together, this approach combines multiple selectivity criteria: Reversed phase chromatography separates lipids according to their acyl chain length and degree of unsaturation and is capable of resolving positional isomers of lysophospholipids, as well as structural isomers of diacyl phospholipids and glycerolipids. Orbitrap mass spectrometry delivers the elemental composition of both positive and negative ions with high mass accuracy. Finally, automatically generated tandem mass spectra provide structural insight into numerous glycerolipids, phospholipids, and sphingolipids within a single run. Calibration showed linearity ranges of more than four orders of magnitude, good values for accuracy and precision at biologically relevant concentration levels, and limits of quantitation of a few femtomoles on column. Hundreds of lipid molecular species were detected and quantified in three different biological matrices, which cover well the wide variety and complexity of various model organisms in lipidomic research. Together with a software package, this method is a prime choice for global lipidomic analysis of even the most complex biological samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Using the Animal Model to Accelerate Response to Selection in a Self-Pollinating Crop

    PubMed Central

    Cowling, Wallace A.; Stefanova, Katia T.; Beeck, Cameron P.; Nelson, Matthew N.; Hargreaves, Bonnie L. W.; Sass, Olaf; Gilmour, Arthur R.; Siddique, Kadambot H. M.

    2015-01-01

    We used the animal model in S0 (F1) recurrent selection in a self-pollinating crop including, for the first time, phenotypic and relationship records from self progeny, in addition to cross progeny, in the pedigree. We tested the model in Pisum sativum, the autogamous annual species used by Mendel to demonstrate the particulate nature of inheritance. Resistance to ascochyta blight (Didymella pinodes complex) in segregating S0 cross progeny was assessed by best linear unbiased prediction over two cycles of selection. Genotypic concurrence across cycles was provided by pure-line ancestors. From cycle 1, 102/959 S0 plants were selected, and their S1 self progeny were intercrossed and selfed to produce 430 S0 and 575 S2 individuals that were evaluated in cycle 2. The analysis was improved by including all genetic relationships (with crossing and selfing in the pedigree), additive and nonadditive genetic covariances between cycles, fixed effects (cycles and spatial linear trends), and other random effects. Narrow-sense heritability for ascochyta blight resistance was 0.305 and 0.352 in cycles 1 and 2, respectively, calculated from variance components in the full model. The fitted correlation of predicted breeding values across cycles was 0.82. Average accuracy of predicted breeding values was 0.851 for S2 progeny of S1 parent plants and 0.805 for S0 progeny tested in cycle 2, and 0.878 for S1 parent plants for which no records were available. The forecasted response to selection was 11.2% in the next cycle with 20% S0 selection proportion. This is the first application of the animal model to cyclic selection in heterozygous populations of selfing plants. The method can be used in genomic selection, and for traits measured on S0-derived bulks such as grain yield. PMID:25943522

  12. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC curve of Me = 0.90; however, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
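
    The comparison protocol (k-fold cross-validation of several classifiers on the same predictors) is easy to reproduce in outline. Below is a sketch with scikit-learn on synthetic stand-in data, showing three of the ten classifiers and two of the reported metrics; nothing here reproduces the study's data or results.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for 10 neuropsychological test scores,
    # binary outcome (progressed to dementia or not)
    X, y = make_classification(n_samples=400, n_features=10, random_state=0)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC()),
                      ("RF", RandomForestClassifier(random_state=0))]:
        acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
        rec = cross_val_score(clf, X, y, cv=5, scoring="recall")  # sensitivity
        print(f"{name}: accuracy={acc.mean():.2f} sensitivity={rec.mean():.2f}")
    ```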

  13. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  14. Direct Linear Transformation Method for Three-Dimensional Cinematography

    ERIC Educational Resources Information Center

    Shapiro, Robert

    1978-01-01

    The ability of the Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)

  15. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural-model-based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, which this paper aims to avoid. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature.
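
    The combination rule itself is plain averaging of the member forecasts. A minimal sketch follows (hypothetical forecast values; the member networks themselves are not implemented here):

    ```python
    import numpy as np

    def ensemble_forecast(member_predictions):
        """Average wind-speed forecasts from several trained networks
        (MLP, Madaline, BPN, PNN in the paper); plain averaging is the
        combination rule the abstract describes."""
        return np.mean(member_predictions, axis=0)

    # Hypothetical next-6-hour forecasts (m/s) from four member models
    preds = np.array([[5.1, 5.4, 6.0, 6.2, 5.8, 5.5],
                      [4.9, 5.6, 5.9, 6.4, 6.0, 5.3],
                      [5.3, 5.2, 6.1, 6.1, 5.7, 5.6],
                      [5.0, 5.5, 6.2, 6.3, 5.9, 5.4]])
    print(ensemble_forecast(preds))
    ```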

  16. Selective and rapid determination of tadalafil and finasteride using solid phase extraction by high performance liquid chromatography and tandem mass spectrometry.

    PubMed

    Pappula, Nagaraju; Kodali, Balaji; Datla, Peda Varma

    2018-04-15

    A highly selective and fast liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated for the simultaneous determination of tadalafil (TDL) and finasteride (FNS) in human plasma, and was successfully applied to the analysis of TDL and FNS samples in a clinical study. The method was validated as per USFDA (United States Food and Drug Administration), EMA (European Medicines Agency), and ANVISA (Agência Nacional de Vigilância Sanitária, Brazil) bioanalytical method validation guidelines. Glyburide (GLB) was used as a common internal standard (ISTD) for both analytes. The selected multiple reaction monitoring (MRM) transitions for mass spectrometric analysis were m/z 390.2/268.2, m/z 373.3/305.4 and m/z 494.2/369.1 for TDL, FNS and the ISTD, respectively. The extraction of analytes and ISTD was accomplished by a simple solid phase extraction (SPE) procedure. Rapid analysis was achieved on a Zorbax Eclipse C18 column (50 × 4.6 mm, 5 μm). The calibration ranges for TDL and FNS were 5-800 ng/mL and 0.2-30 ng/mL, respectively. The precision and accuracy, linearity, recovery and matrix effect of the method were acceptable: accuracy was in the range of 92.9%-106.4% and method precision was good, with %CV less than 8.1%. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural-model-based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, which this paper aims to avoid. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature. PMID:27034973

  18. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware.

    PubMed

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large-scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays are a good choice. However, their resolution depends on the algorithms used for curve fitting. In this work, a detailed analysis is provided of how the choice of algorithm using the Gaussian approximation for the FBG spectrum, and the number of pixels used for curve fitting, affect the errors; the points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on a tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost than the more popular methods using iterative non-linear least-squares estimation can be used without loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
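
    One well-known closed-form alternative to iterative non-linear least squares, consistent with the low-cost Gaussian-approximation approach discussed here though not necessarily identical to the paper's estimator, fits a parabola to the logarithm of the detector samples:

    ```python
    import numpy as np

    def gaussian_peak_center(pixel_wavelengths, intensities):
        """Closed-form Gaussian fit via a parabola through the log data.

        The log of a Gaussian is a parabola, so an ordinary quadratic
        least-squares fit recovers the peak center without an iterative
        non-linear solver; the Bragg wavelength is the parabola's vertex.
        """
        a, b, _ = np.polyfit(pixel_wavelengths, np.log(intensities), 2)
        return -b / (2.0 * a)  # vertex of the fitted parabola

    # Hypothetical FBG spectrum samples around the Bragg wavelength (nm)
    lam = np.linspace(1549.6, 1550.4, 9)
    spec = np.exp(-(lam - 1550.05) ** 2 / (2 * 0.15 ** 2)) + 1e-6
    print(gaussian_peak_center(lam, spec))  # ~1550.05
    ```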

  19. Performance improvement for optimization of the non-linear geometric fitting problem in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano

    2014-08-01

    Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting a substitute geometry to a cloud of points, and this article addresses that challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in constructing a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of a non-linear function when fitting the three geometries. The results show that, with this combination, higher-quality fitting results, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', a situation where the point cloud does not cover a complete feature (e.g. only half of the total part surface), is also investigated. Finally, a case study of fitting a hemisphere is presented.
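
    The two-stage structure (a cheap global initializer feeding Levenberg-Marquardt) can be illustrated on the circle case. In the sketch below the chaos-optimization initializer is replaced by the simpler algebraic Kasa fit, so only the overall structure, not the paper's CO method, is shown.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_circle(x, y):
        """Non-linear least-squares circle fit with an algebraic initial guess."""
        # Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c  (linear in a, b, c)
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
        r0 = np.sqrt(c + a**2 + b**2)

        def residuals(p):  # geometric distances from the candidate circle
            cx, cy, r = p
            return np.hypot(x - cx, y - cy) - r

        # Levenberg-Marquardt refinement seeded with the algebraic estimate
        sol = least_squares(residuals, x0=[a, b, r0], method="lm")
        return sol.x  # (center_x, center_y, radius)

    # Noisy points on a circle centered at (1, -2) with radius 3
    rng = np.random.default_rng(2)
    t = rng.uniform(0, 2 * np.pi, 100)
    px = 1 + 3 * np.cos(t) + rng.normal(0, 0.01, t.size)
    py = -2 + 3 * np.sin(t) + rng.normal(0, 0.01, t.size)
    print(fit_circle(px, py))  # ~[1, -2, 3]
    ```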

  20. Evaluation of fiber Bragg grating sensor interrogation using InGaAs linear detector arrays and Gaussian approximation on embedded hardware

    NASA Astrophysics Data System (ADS)

    Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan

    2018-02-01

    Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large-scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays are a good choice. However, their resolution depends on the algorithms used for curve fitting. In this work, a detailed analysis is provided of how the choice of algorithm using the Gaussian approximation for the FBG spectrum, and the number of pixels used for curve fitting, affect the errors; the points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on a tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost than the more popular methods using iterative non-linear least-squares estimation can be used without loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.

  1. High-Speed Digital Scan Converter for High-Frequency Ultrasound Sector Scanners

    PubMed Central

    Chang, Jin Ho; Yen, Jesse T.; Shung, K. Kirk

    2008-01-01

    This paper presents a high-speed digital scan converter (DSC) capable of providing more than 400 images per second, which is necessary to examine the activity of the mouse heart, whose rate is 5-10 beats per second. To achieve the desired high-speed performance in a cost-effective manner, the developed DSC adopts a linear interpolation algorithm in which the two samples nearest to each object pixel of the monitor are selected and only angular interpolation is performed. Through computer simulation with the Field II program, its accuracy was investigated by comparison with bilinear interpolation, regarded as the best algorithm in terms of accuracy and processing speed. The simulation results show that the linear interpolation algorithm is capable of providing acceptable image quality, in the sense that the difference in the root mean square error (RMSE) values of the linear and bilinear interpolation algorithms is below 1%, provided the sample rate of the envelope samples is at least four times the Nyquist rate for the baseband component of the echo signals. The designed DSC was implemented with a single FPGA (Stratix EP1S60F1020C6, Altera Corporation, San Jose, CA) on a DSC board that is part of a newly developed high-speed ultrasound imaging system. The temporal and spatial resolutions of the implemented DSC were evaluated by examining its maximum processing time, using a time stamp indicating when an image is completely formed, and by wire phantom testing, respectively. The experimental results show that the implemented DSC is capable of providing images at a rate of 400 images per second with negligible processing error. PMID:18430449
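
    A sketch of the angle-only interpolation scheme described above follows; the array shapes, names, and the nearest-sample range lookup are our reading of the description, not the FPGA implementation.

    ```python
    import numpy as np

    def scan_convert(env, angles, r_max, nx=256, ny=256):
        """Sector-scan conversion with angle-only linear interpolation.

        env: 2-D envelope data, shape (num_beams, num_samples), with beams
        at 'angles' (radians, sorted). For each screen pixel the two
        nearest beams are blended linearly in angle; along each beam the
        nearest range sample is used, as in the simplified scheme above.
        """
        n_beams, n_samp = env.shape
        xs = np.linspace(-r_max, r_max, nx)
        zs = np.linspace(0, r_max, ny)
        X, Z = np.meshgrid(xs, zs)
        R = np.hypot(X, Z)
        TH = np.arctan2(X, Z)  # 0 rad points straight down the sector

        img = np.zeros((ny, nx))
        inside = (R <= r_max) & (TH >= angles[0]) & (TH <= angles[-1])
        i = np.clip(np.searchsorted(angles, TH) - 1, 0, n_beams - 2)
        w = (TH - angles[i]) / (angles[i + 1] - angles[i])   # angular weight
        k = np.clip((R / r_max * (n_samp - 1)).round().astype(int),
                    0, n_samp - 1)                           # nearest range sample
        img[inside] = ((1 - w) * env[i, k] + w * env[i + 1, k])[inside]
        return img

    # Hypothetical 64-beam, 512-sample envelope over a 90-degree sector
    demo = scan_convert(np.random.rand(64, 512),
                        np.linspace(-np.pi / 4, np.pi / 4, 64), r_max=0.06)
    ```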

  2. Geometric accuracy of Landsat-4 and Landsat-5 Thematic Mapper images.

    USGS Publications Warehouse

    Borgeson, W.T.; Batson, R.M.; Kieffer, H.H.

    1985-01-01

    The geometric accuracy of the Landsat Thematic Mappers was assessed by a linear least-square comparison of the positions of conspicuous ground features in digital images with their geographic locations as determined from 1:24 000-scale maps. For a Landsat-5 image, the single-dimension standard deviations of the standard digital product, and of this image with additional linear corrections, are 11.2 and 10.3 m, respectively (0.4 pixel). An F-test showed that skew and affine distortion corrections are not significant. At this level of accuracy, the granularity of the digital image and the probable inaccuracy of the 1:24 000 maps began to affect the precision of the comparison. The tested image, even with a moderate accuracy loss in the digital-to-graphic conversion, meets National Horizontal Map Accuracy standards for scales of 1:100 000 and smaller. Two Landsat-4 images, obtained with the Multispectral Scanner on and off, and processed by an interim software system, contain significant skew and affine distortions. -Authors

  3. a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With improvements in the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, dense point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighboring points and a surface fitting method based on moving least squares is proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total station (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm, and a series of parallel sections obtained from the temporal series of fitted tunnel surfaces were compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor approach, and the results in the x, y directions were compared with TS, respectively; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.

  4. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global shifts of peaks due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.
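
    For reference, the fused lasso estimator underlying the band selection has the standard form (notation ours; \(\lambda_1, \lambda_2\) are tuning parameters):

    \[
    \hat{\beta} = \arg\min_{\beta}\; \lVert y - X\beta \rVert_2^2
    + \lambda_1 \sum_{j=1}^{p} \lvert \beta_j \rvert
    + \lambda_2 \sum_{j=2}^{p} \lvert \beta_j - \beta_{j-1} \rvert .
    \]

    The \(\ell_1\) term zeroes out most bands (the dimensionality reduction that counters the small-sample-size problem), while the second, total-variation term encourages neighboring wavelengths to share coefficients, which buffers narrow-band noise and small peak shifts.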

  5. Individual Finger Control of the Modular Prosthetic Limb using High-Density Electrocorticography in a Human Subject

    PubMed Central

    Fifer, Matthew S.; Johannes, Matthew S.; Katyal, Kapil D.; Para, Matthew P.; Armiger, Robert; Anderson, William S.; Thakor, Nitish V.; Wester, Brock A.; Crone, Nathan E.

    2016-01-01

    Objective We used native sensorimotor representations of fingers in a brain-machine interface to achieve immediate online control of individual prosthetic fingers. Approach Using high gamma responses recorded with a high-density ECoG array, we rapidly mapped the functional anatomy of cued finger movements. We used these cortical maps to select ECoG electrodes for a hierarchical linear discriminant analysis classification scheme to predict: 1) if any finger was moving, and, if so, 2) which digit was moving. To account for sensory feedback, we also mapped the spatiotemporal activation elicited by vibrotactile stimulation. Finally, we used this prediction framework to provide immediate online control over individual fingers of the Johns Hopkins University Applied Physics Laboratory (JHU/APL) Modular Prosthetic Limb (MPL). Main Results The balanced classification accuracy for detection of movements during the online control session was 92% (chance: 50%). At the onset of movement, finger classification was 76% (chance: 20%), and 88% (chance: 25%) if the pinky and ring finger movements were coupled. Balanced accuracy of fully flexing the cued finger was 64%, and 77% had we combined pinky and ring commands. Offline decoding yielded a peak finger decoding accuracy of 96.5% (chance: 20%) when using an optimized selection of electrodes. Offline analysis demonstrated significant finger-specific activations throughout sensorimotor cortex. Activations either prior to movement onset or during sensory feedback led to discriminable finger control. Significance Our results demonstrate the ability of ECoG-based BMIs to leverage the native functional anatomy of sensorimotor cortical populations to immediately control individual finger movements in real time. PMID:26863276

  6. Individual finger control of a modular prosthetic limb using high-density electrocorticography in a human subject

    NASA Astrophysics Data System (ADS)

    Hotson, Guy; McMullen, David P.; Fifer, Matthew S.; Johannes, Matthew S.; Katyal, Kapil D.; Para, Matthew P.; Armiger, Robert; Anderson, William S.; Thakor, Nitish V.; Wester, Brock A.; Crone, Nathan E.

    2016-04-01

    Objective. We used native sensorimotor representations of fingers in a brain-machine interface (BMI) to achieve immediate online control of individual prosthetic fingers. Approach. Using high gamma responses recorded with a high-density electrocorticography (ECoG) array, we rapidly mapped the functional anatomy of cued finger movements. We used these cortical maps to select ECoG electrodes for a hierarchical linear discriminant analysis classification scheme to predict: (1) if any finger was moving, and, if so, (2) which digit was moving. To account for sensory feedback, we also mapped the spatiotemporal activation elicited by vibrotactile stimulation. Finally, we used this prediction framework to provide immediate online control over individual fingers of the Johns Hopkins University Applied Physics Laboratory modular prosthetic limb. Main results. The balanced classification accuracy for detection of movements during the online control session was 92% (chance: 50%). At the onset of movement, finger classification was 76% (chance: 20%), and 88% (chance: 25%) if the pinky and ring finger movements were coupled. Balanced accuracy of fully flexing the cued finger was 64%, and 77% had we combined pinky and ring commands. Offline decoding yielded a peak finger decoding accuracy of 96.5% (chance: 20%) when using an optimized selection of electrodes. Offline analysis demonstrated significant finger-specific activations throughout sensorimotor cortex. Activations either prior to movement onset or during sensory feedback led to discriminable finger control. Significance. Our results demonstrate the ability of ECoG-based BMIs to leverage the native functional anatomy of sensorimotor cortical populations to immediately control individual finger movements in real time.

  7. Sulfadiazine-selective determination in aquaculture environment: selective potentiometric transduction by neutral or charged ionophores.

    PubMed

    Almeida, S A A; Heitor, A M; Montenegro, M C B S M; Sales, M G F

    2011-09-15

    Solid-contact sensors for the selective screening of sulfadiazine (SDZ) in aquaculture waters are reported. Sensor surfaces were made from PVC membranes doped with tetraphenylporphyrin-manganese(III) chloride, α-cyclodextrin, β-cyclodextrin, or γ-cyclodextrin ionophores dispersed in plasticizer. Some membranes also included a positively or negatively charged additive. Porphyrin-based sensors relied on a charged-carrier mechanism; they exhibited a near-Nernstian response with slopes of 52 mV decade⁻¹ and detection limits of 3.91×10⁻⁵ mol L⁻¹. The addition of cationic lipophilic compounds to the membrane produced Nernstian behaviours, with slopes ranging from 59.7 to 62.0 mV decade⁻¹ and wider linear ranges. Cyclodextrin-based sensors acted as neutral carriers. In general, sensors with positively charged additives showed improved potentiometric performance compared to those without additive. Some SDZ-selective membranes displayed higher slopes and extended linear concentration ranges with an increasing amount of additive (always <100% of ionophore). The sensors were independent of the pH of the test solutions within pH 2-7 and displayed fast responses, always <15 s. In general, a good discriminating ability was found in real sample environments. The sensors were successfully applied to the fast screening of SDZ in real water samples from aquaculture fish farms. The method offers the advantages of simplicity, accuracy, and automation feasibility. The sensing membrane may contribute to the development of small devices allowing in locus measurements of sulfadiazine or parent drugs. Copyright © 2011 Elsevier B.V. All rights reserved.
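
    The benchmark behind terms such as "Nernstian" and "near-Nernstian" above is the ideal electrode response (standard electrochemistry, not restated in the paper):

    \[
    E = E^{0} \pm \frac{2.303\,RT}{zF}\,\log_{10} a_i, \qquad
    \left.\frac{2.303\,RT}{F}\right|_{T = 298\ \mathrm{K}} \approx 59.16\ \mathrm{mV},
    \]

    so a monovalent ion ideally gives a slope of about 59.2 mV per decade of activity; the reported 52 mV decade⁻¹ (near-Nernstian) and 59.7-62.0 mV decade⁻¹ slopes are judged against this value.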

  8. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
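
    The matrix kernel of the approach, an interpolative decomposition computed from a randomized sketch, can be outlined briefly. The recipe below follows the usual randomized-ID pattern (random projection, then pivoted QR to select columns); details such as the oversampling are our choices, not the paper's.

    ```python
    import numpy as np
    from scipy.linalg import qr, lstsq

    def randomized_id(A, k, oversample=8, seed=0):
        """Randomized interpolative decomposition of a matrix.

        Project the rows into a small random subspace, then use pivoted QR
        on the sketch to pick k representative columns; the remaining
        columns are expressed as linear combinations of the selected ones.
        """
        rng = np.random.default_rng(seed)
        sketch = rng.standard_normal((k + oversample, A.shape[0])) @ A
        _, _, piv = qr(sketch, pivoting=True, mode="economic")
        cols = piv[:k]                      # indices of the selected columns
        coef = lstsq(A[:, cols], A)[0]      # A ~= A[:, cols] @ coef
        return cols, coef

    # Low-rank test: 200 x 120 matrix of rank ~10
    rng = np.random.default_rng(1)
    A = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 120))
    cols, coef = randomized_id(A, k=10)
    print(np.linalg.norm(A - A[:, cols] @ coef) / np.linalg.norm(A))
    ```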

  9. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage, and most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by means of the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables; the software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.

  10. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    PubMed Central

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage, and most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by means of the developed feature reduction software, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables; the software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172

  11. Research of Face Recognition with Fisher Linear Discriminant

    NASA Astrophysics Data System (ADS)

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy remains difficult: human faces have diverse expressions and attribute changes such as eyeglasses, mustaches, beards and others. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes and also increases the distance between classes relative to the scatter within each class, so as to produce better classification.
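
    For the two-class case, the Fisher direction has a closed form, \(w = S_w^{-1}(\mu_1 - \mu_2)\). A minimal sketch follows (synthetic stand-in clusters rather than face features):

    ```python
    import numpy as np

    def fisher_direction(X1, X2):
        """Two-class Fisher Linear Discriminant: the projection w that
        maximizes between-class separation relative to within-class
        scatter, w = Sw^{-1} (mu1 - mu2)."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
           + np.cov(X2, rowvar=False) * (len(X2) - 1)  # within-class scatter
        return np.linalg.solve(Sw, m1 - m2)

    # Hypothetical 2-D feature clusters for two identities
    rng = np.random.default_rng(3)
    A = rng.normal([0, 0], 0.5, (100, 2))
    B = rng.normal([2, 1], 0.5, (100, 2))
    w = fisher_direction(A, B)
    print(w / np.linalg.norm(w))
    ```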

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron

    Moment-based acceleration via the development of “high-order, low-order” (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton-Krylov procedure. This also prevents the appearance of non-physical maximum principle violations (“temperature spikes”) associated with linearization. Efficiency enhancements are obtained in part by removing “effective scattering” from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.

  13. Impact of Linearity and Write Noise of Analog Resistive Memory Devices in a Neural Algorithm Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.

    Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2 and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. As a result, this suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.

  14. Impact of Linearity and Write Noise of Analog Resistive Memory Devices in a Neural Algorithm Accelerator

    DOE PAGES

    Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.; ...

    2017-12-01

    Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for the training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2 and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. As a result, this suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.

  15. Accuracy of analytic energy level formulas applied to hadronic spectroscopy of heavy mesons

    NASA Technical Reports Server (NTRS)

    Badavi, Forooz F.; Norbury, John W.; Wilson, John W.; Townsend, Lawrence W.

    1988-01-01

    Linear and harmonic potential models are used in the nonrelativistic Schroedinger equation to obtain particle mass spectra for mesons as bound states of quarks. The main emphasis is on the linear potential, where exact solutions for the S-state eigenvalues and eigenfunctions and the asymptotic solution for the higher order partial waves are obtained. A study of the accuracy of two analytical energy level formulas as applied to heavy mesons is also included. Cornwall's formula is found to be particularly accurate and useful as a predictor of heavy quarkonium states. Exact solutions for all partial waves of the eigenvalues and eigenfunctions for a harmonic potential are also obtained and compared with the calculated discrete spectra of the linear potential. Detailed derivations of the eigenvalues and eigenfunctions of the linear and harmonic potentials are presented in appendixes.

  16. Accuracy of digital models generated by conventional impression/plaster-model methods and intraoral scanning.

    PubMed

    Tomita, Yuki; Uechi, Jun; Konno, Masahiro; Sasamoto, Saera; Iijima, Masahiro; Mizoguchi, Itaru

    2018-04-17

    We compared the accuracy of digital models generated by desktop scanning of conventional impression/plaster models versus intraoral scanning. Eight ceramic spheres were attached to the buccal molar regions of dental epoxy models, and reference linear-distance measurements were determined using a contact-type coordinate measuring instrument. Alginate (AI group) and silicone (SI group) impressions were taken and converted into cast models using dental stone; the models were scanned using a desktop scanner. As an alternative, intraoral scans were taken using an intraoral scanner, and digital models were generated from these scans (IOS group). Twelve linear-distance measurement combinations were calculated between different sphere centers for all digital models. There were no significant differences among the three groups for the total of six linear-distance measurements. When limited to five linear-distance measurements, the IOS group showed significantly higher accuracy than the AI and SI groups. Intraoral scans may be more accurate than scans of conventional impression/plaster models.

  17. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  18. Development of a Bolometer Detector System for the NIST High Accuracy Infrared Spectrophotometer

    PubMed Central

    Zong, Y.; Datla, R. U.

    1998-01-01

    A bolometer detector system was developed for the high accuracy infrared spectrophotometer at the National Institute of Standards and Technology to provide maximum sensitivity, spatial uniformity, and linearity of response covering the entire infrared spectral range. The spatial response variation was measured to be within 0.1 %. The linearity of the detector output was measured over three decades of input power. After applying a simple correction procedure, the detector output was found to deviate less than 0.2 % from linear behavior over this range. The noise equivalent power (NEP) of the bolometer system was 6 × 10⁻¹² W/√Hz at a frequency of 80 Hz. The detector output 3 dB roll-off frequency was 200 Hz. The detector output was stable to within ± 0.05 % over a 15 min period. These results demonstrate that the bolometer detector system will serve as an excellent detector for the high accuracy infrared spectrophotometer. PMID:28009364

  19. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points of potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares method validation and multifactor data analysis, is covered. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and equivalence-point-category methods, such as those of Gran or Fortuin.
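
    The end-point location step has a simple closed form: fit a parabola through the three derivative values around the extremum and take its vertex. The sketch below uses a synthetic sigmoid curve and a plain finite-difference derivative in place of the paper's four-point non-linear fitting preprocess.

    ```python
    # Locate a titration end point as the vertex of a parabola fitted through
    # the three first-derivative points around the maximum (inverse parabolic
    # interpolation).
    import numpy as np

    v = np.linspace(9.0, 11.0, 21)                  # titrant volume (mL)
    ph = 7.0 + 3.5 * np.tanh(4.0 * (v - 10.02))     # synthetic titration curve

    dph = np.gradient(ph, v)                        # first-derivative estimate
    i = int(np.argmax(dph))                         # grid point nearest the peak

    x1, x2, x3 = v[i - 1], v[i], v[i + 1]
    y1, y2, y3 = dph[i - 1], dph[i], dph[i + 1]
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    end_point = x2 - 0.5 * num / den
    print(f"estimated end point: {end_point:.3f} mL")   # near the true 10.02
    ```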

  20. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    PubMed Central

    Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María

    2016-01-01

    Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542

  1. [Relation between Body Height and Combined Length of Manubrium and Mesosternum of Sternum Measured by CT-VRT in Southwest Han Population].

    PubMed

    Luo, Ying-zhen; Tu, Meng; Fan, Fei; Zheng, Jie-qian; Yang, Ming; Li, Tao; Zhang, Kui; Deng, Zhen-hua

    2015-06-01

    To establish linear regression equations between body height and the combined length of the manubrium and mesosternum of the sternum measured by the CT volume rendering technique (CT-VRT) in a southwest Han population, one hundred and sixty subjects, including 80 males and 80 females, were selected for routine CT-VRT (reconstruction thickness 1 mm) examination. The lengths of both the manubrium and the mesosternum were recorded, and their combined length was taken as the sum of the two. Sex-specific simple linear regression equations between the combined length of manubrium and mesosternum (x3) and body height (y) were established (male: y = 135.000 + 2.118 x3; female: y = 120.790 + 2.808 x3). Both equations showed statistical significance (P < 0.05) with 100% predictive accuracy. CT-VRT is an effective method for measuring sternal indices, and the combined length of the manubrium and mesosternum from CT-VRT can be used for body height estimation in the southwest Han population.
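
    The reported equations translate directly into a helper function; the abstract does not state the units, so centimetres are assumed here for both the sternal length and the height.

    ```python
    # Sex-specific height estimation from the combined manubrium + mesosternum
    # length, using the regression equations quoted in the abstract.
    def estimate_height(combined_length_cm: float, sex: str) -> float:
        if sex == "male":
            return 135.000 + 2.118 * combined_length_cm
        if sex == "female":
            return 120.790 + 2.808 * combined_length_cm
        raise ValueError("sex must be 'male' or 'female'")

    print(estimate_height(16.0, "male"))    # 168.9 (cm) for a 16 cm sternum
    ```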

  2. Use of AMMI and linear regression models to analyze genotype-environment interaction in durum wheat.

    PubMed

    Nachit, M M; Nachit, G; Ketata, H; Gauch, H G; Zobel, R W

    1992-03-01

    The joint durum wheat (Triticum turgidum L var 'durum') breeding program of the International Maize and Wheat Improvement Center (CIMMYT) and the International Center for Agricultural Research in the Dry Areas (ICARDA) for the Mediterranean region employs extensive multilocation testing. Multilocation testing produces significant genotype-environment (GE) interaction that reduces the accuracy of yield estimation and of selecting appropriate germplasm. The sum of squares (SS) of the GE interaction was partitioned by linear regression techniques into joint, genotypic, and environmental regressions, and by the Additive Main effects and Multiplicative Interactions (AMMI) model into five significant Interaction Principal Component Axes (IPCA). The AMMI model was more effective in partitioning the interaction SS than the linear regression technique: the SS contained in the AMMI model was 6 times higher than the SS for all three regressions. Postdictive assessment recommended the use of the first five IPCA axes, while predictive assessment recommended AMMI1 (main effects plus IPCA1). After elimination of random variation, AMMI1 estimates for genotypic yields within sites were more precise than unadjusted means; this increased precision was equivalent to increasing the number of replications by a factor of 3.7.
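
    The AMMI decomposition itself is a double-centering followed by an SVD of the interaction residuals; the sketch below shows the idea on a random genotype-by-environment table, not the CIMMYT/ICARDA trial data.

    ```python
    # AMMI sketch: remove additive main effects, then SVD the interaction
    # matrix; the leading singular axes are the IPCA terms.
    import numpy as np

    rng = np.random.default_rng(2)
    Y = rng.normal(5.0, 1.0, size=(10, 6))          # genotypes x environments

    grand = Y.mean()
    g_eff = Y.mean(axis=1, keepdims=True) - grand   # genotype main effects
    e_eff = Y.mean(axis=0, keepdims=True) - grand   # environment main effects
    GE = Y - grand - g_eff - e_eff                  # interaction residuals

    U, s, Vt = np.linalg.svd(GE, full_matrices=False)
    Y_ammi1 = grand + g_eff + e_eff + s[0] * np.outer(U[:, 0], Vt[0])  # AMMI1
    print(f"interaction SS captured by IPCA1: {s[0]**2 / (s**2).sum():.2%}")
    ```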

  3. Validated spectrofluorimetric method for the determination of tamsulosin in spiked human urine, pure and pharmaceutical preparations.

    PubMed

    Karasakal, A; Ulu, S T

    2014-05-01

    A novel, sensitive and selective spectrofluorimetric method was developed for the determination of tamsulosin in spiked human urine and pharmaceutical preparations. The proposed method is based on the reaction of tamsulosin with 1-dimethylaminonaphthalene-5-sulfonyl chloride in carbonate buffer pH 10.5 to yield a highly fluorescent derivative. The described method was validated and the analytical parameters of linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, recovery and robustness were evaluated. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over the range 1.22 × 10⁻⁷ to 7.35 × 10⁻⁶ M. LOD and LOQ were calculated as 1.07 × 10⁻⁷ and 3.23 × 10⁻⁷ M, respectively. The proposed method was successfully applied for the determination of tamsulosin in pharmaceutical preparations and the obtained results were in good agreement with those obtained using the reference method. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Reversed phase HPLC for strontium ranelate: Method development and validation applying experimental design.

    PubMed

    Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád

    2018-06-01

    A reversed-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate was eluted at 4.4 minutes and showed no interference with the excipients used in the formulation at 321 nm. The method is linear in the range of 20–320 μg mL⁻¹ (R² = 0.99998). Recovery, tested in the range of 40–120 μg mL⁻¹, was found to be 96.1–102.1 %. Intra-day and intermediate precision RSDs ranged from 1.0–1.4 % and 1.2–1.4 %, respectively. The limit of detection and limit of quantitation were 0.06 and 0.20 μg mL⁻¹, respectively. The proposed technique is fast, cost-effective, reliable and reproducible, and is proposed for the routine analysis of strontium ranelate.

  5. Development and Validation of RP-LC Method for the Determination of Cinnarizine/Piracetam and Cinnarizine/Heptaminol Acefyllinate in Presence of Cinnarizine Reported Degradation Products

    PubMed Central

    EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.

    2013-01-01

    A specific stability-indicating reverse-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and a gradient mobile phase were applied for good resolution of all peaks. Detection was carried out at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20–200, 20–1000, and 25–1000 μg mL⁻¹ for Cinn, Pira, and Hept, respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory-prepared mixtures and in pharmaceutical formulations. PMID:24137049

  6. A linear recurrent kernel online learning algorithm with sparse updates.

    PubMed

    Fan, Haijin; Song, Qing

    2014-02-01

    In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output, which makes past information reusable for updating the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch learning of the recurrent information on or off according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set to zero when the training violates the derived convergence conditions, which makes the algorithm's updating process sparse. Theoretical analyses of the weight convergence are presented, and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy. Copyright © 2013 Elsevier Ltd. All rights reserved.
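
    The gating idea, updating only when the error is informative and otherwise taking a zero step, can be shown with a generic kernel online learner. This sketch is not the authors' recurrent algorithm or their derived convergence test; the Gaussian kernel, the fixed threshold, and the data are illustrative assumptions.

    ```python
    # Generic kernel online learning with gated (sparse) updates: the model is
    # a growing kernel expansion, and an update is skipped when |error| falls
    # below a threshold, mimicking a learning rate of zero.
    import numpy as np

    def gauss_kernel(x, c, gamma=1.0):
        return np.exp(-gamma * np.sum((x - c) ** 2))

    rng = np.random.default_rng(3)
    centers, alphas = [], []
    eta, threshold = 0.3, 0.05

    for t in range(500):
        x = rng.uniform(-1, 1, size=2)
        y = np.sin(3 * x[0]) * x[1] + 0.01 * rng.normal()
        y_hat = sum(a * gauss_kernel(x, c) for a, c in zip(alphas, centers))
        err = y - y_hat
        if abs(err) > threshold:          # gate: no update on small errors
            centers.append(x)
            alphas.append(eta * err)

    print(f"dictionary size after 500 samples: {len(centers)}")
    ```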

  7. Selection of a Geostatistical Method to Interpolate Soil Properties of the State Crop Testing Fields using Attributes of a Digital Terrain Model

    NASA Astrophysics Data System (ADS)

    Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.

    2018-03-01

    The three most common techniques to interpolate soil properties at a field scale—ordinary kriging (OK), regression kriging with a multiple linear regression drift model (RK + MLR), and regression kriging with a principal component regression drift model (RK + PCR)—were examined. The results of the study were compiled into an algorithm for choosing the most appropriate soil mapping technique, with relief attributes used as the auxiliary variables. When the spatial dependence of a target variable was strong, the OK method gave more accurate interpolation results, and the inclusion of the auxiliary data yielded only an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of the explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced and application of the PCR becomes irrational; in that case, multiple linear regression should be used instead.

  8. Transition properties from the Hermitian formulation of the coupled cluster polarization propagator

    NASA Astrophysics Data System (ADS)

    Tucholska, Aleksandra M.; Modrzejewski, Marcin; Moszynski, Robert

    2014-09-01

    A theory of one-electron transition density matrices has been formulated within the time-independent coupled cluster method for the polarization propagator [R. Moszynski, P. S. Żuchowski, and B. Jeziorski, Coll. Czech. Chem. Commun. 70, 1109 (2005)]. Working expressions have been obtained and implemented with the coupled cluster method limited to single, double, and linear triple excitations (CC3). Selected dipole and quadrupole transition probabilities of the alkaline earth atoms, computed with the new transition density matrices, are compared to the experimental data. Good agreement between theory and experiment is found, and the results obtained with the new approach are of the same quality as those obtained with linear response coupled cluster theory. The one-electron density matrices for the ground state in the CC3 approximation have also been implemented. The dipole moments for a few representative diatomic molecules have been computed with several variants of the new approach, and the results are discussed to choose the approximation with the best balance between accuracy and computational efficiency.

  9. Implementing DBS methodology for the determination of Compound A in monkey blood: GLP method validation and investigation of the impact of blood spreading on performance.

    PubMed

    Fan, Leimin; Lee, Jacob; Hall, Jeffrey; Tolentino, Edward J; Wu, Huaiqin; El-Shourbagy, Tawakol

    2011-06-01

    This article describes validation work for the analysis of an Abbott investigational drug (Compound A) in monkey whole blood with dried blood spots (DBS). The impact of DBS spotting volume on analyte concentration was investigated. The quantitation range was between 30.5 and 10,200 ng/ml. Accuracy and precision of quality controls, linearity of calibration curves, matrix effect, selectivity, dilution, recovery and multiple stabilities were evaluated in the validation, and all demonstrated acceptable results. Incurred sample reanalysis was performed, with 57 out of 58 samples having a percentage difference (versus the mean value) of less than 20%. A linear relationship between the spotting volume and the spot area was observed, and the influence of spotting volume on concentration is discussed. All validation results met good laboratory practice acceptance requirements. Radial spreading of blood on DBS cards can be a factor in DBS concentrations at smaller spotting volumes.

  10. Combining markers with and without the limit of detection

    PubMed Central

    Dong, Ting; Liu, Catherine Chunling; Petricoin, Emanuel F.; Tang, Liansheng Larry

    2014-01-01

    In this paper, we consider the combination of markers with and without a limit of detection (LOD). An LOD is often encountered when measuring proteomic markers: because of the limited detecting ability of the equipment or instrument, it is difficult to measure markers at relatively low levels. Suppose that, after some monotonic transformation, the marker values approximately follow multivariate normal distributions. We propose to estimate the distribution parameters while taking the LOD into account, and then to combine markers using the results from linear discriminant analysis. Our simulation results show that the ROC curve parameter estimates generated from the proposed method are much closer to the truth than simply using linear discriminant analysis to combine markers without considering the LOD. In addition, we propose a procedure to select and combine a subset of markers when many candidate markers are available. The procedure, based on the correlation among markers, differs from the common understanding that a subset of the most accurate markers should be selected for the combination; the simulation studies show that the accuracy of a combined marker can be largely impacted by the correlation of marker measurements. Our methods are applied to a protein pathway dataset to combine proteomic biomarkers to distinguish cancer patients from non-cancer patients. PMID:24132938
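
    A minimal sketch of the combination step: estimate the two class distributions, form the Fisher weights w = S⁻¹(μ₁ − μ₀), and score the combined marker by AUC. The paper estimates the multivariate-normal parameters properly accounting for the LOD; the crude LOD/2 imputation below is only a placeholder for that step.

    ```python
    # Combine two markers by linear discriminant analysis when marker 2 is
    # left-censored at a limit of detection (LOD).
    import numpy as np

    rng = np.random.default_rng(4)
    n = 300
    cov = [[1.0, 0.4], [0.4, 1.0]]
    healthy = rng.multivariate_normal([1.0, 1.0], cov, n)
    cancer = rng.multivariate_normal([2.0, 1.8], cov, n)

    lod = 0.5                              # marker 2 unmeasurable below this
    for grp in (healthy, cancer):
        grp[:, 1] = np.where(grp[:, 1] < lod, lod / 2, grp[:, 1])

    mu0, mu1 = healthy.mean(0), cancer.mean(0)
    S = 0.5 * (np.cov(healthy.T) + np.cov(cancer.T))   # pooled covariance
    w = np.linalg.solve(S, mu1 - mu0)                  # combination weights

    scores = np.r_[healthy @ w, cancer @ w]
    labels = np.r_[np.zeros(n), np.ones(n)]
    ranks = scores.argsort().argsort() + 1             # rank-sum AUC identity
    auc = (ranks[labels == 1].sum() - n * (n + 1) / 2) / n**2
    print(f"AUC of combined marker: {auc:.3f}")
    ```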

  11. Predicting discovery rates of genomic features.

    PubMed

    Gravel, Simon

    2014-06-01

    Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing 5% of a population is sufficient to predict the number of genetic variants in the entire population within 6% of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require ∼15% of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and subsampled 1000 Genomes Project data. Extrapolating based on the National Heart, Lung, and Blood Institute Exome Sequencing Project data, we predict that 7.2% of sites in the capture region would be variable in a sample of 50,000 African Americans and 8.8% in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types. Copyright © 2014 by the Genetics Society of America.

  12. Validated semiquantitative/quantitative screening of 51 drugs in whole blood as silylated derivatives by gas chromatography-selected ion monitoring mass spectrometry and gas chromatography electron capture detection.

    PubMed

    Gunnar, Teemu; Mykkänen, Sirpa; Ariniemi, Kari; Lillsunde, Pirjo

    2004-07-05

    A comprehensively validated procedure is presented for the simultaneous semiquantitative/quantitative screening of 51 drugs of abuse, or drugs potentially hazardous to traffic safety, in serum, plasma or whole blood. Benzodiazepines (12), cannabinoids (3), opioids (8), cocaine, antidepressants (13), antipsychotics (5) and antiepileptics (2), as well as zolpidem, zaleplon, zopiclone, meprobamate, carisoprodol, tizanidine and orphenadrine and the internal standard flurazepam, were isolated by high-yield liquid-liquid extraction (LLE). The dried extracts were derivatized by two-step silylation and analyzed by the combination of two different gas chromatographic (GC) separations with both electron capture detection (ECD) and mass spectrometry (MS) operating in selected ion-monitoring (SIM) mode. Quantitative or semiquantitative results were obtained for each substance based on four-point calibration. In the validation tests, accuracy, reproducibility, linearity, limit of detection (LOD) and limit of quantitation (LOQ), selectivity, as well as extraction efficiency and stability of standard stock solutions were tested, and the derivatization was optimized in detail. Intra- and inter-day precisions were within 2.5–21.8% and 6.0–22.5%, and the squared correlation coefficients (R²) of the linearity ranged from 0.9896 to 0.9999. The LOQ varied from 2 to 2000 ng/ml owing to the variety of relevant concentrations of the analyzed substances in blood. The method is feasible for highly sensitive, reliable, and possibly routine clinical and forensic toxicological analyses.

  13. A 1.3-μm four-channel directly modulated laser array fabricated by SAG-Upper-SCH technology

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Lu, Dan; Zhang, Ruikang; Liu, Songtao; Sun, Mengdie; Kan, Qiang; Ji, Chen

    2017-01-01

    A monolithically integrated four-channel directly modulated laser (DML) array working at the 1.3-μm band is demonstrated. The laser was manufactured by using the techniques of selective area growth (SAG) of the upper separate confinement heterostructure (Upper-SCH) and modified butt-joint method. The fabricated device showed stable single mode operation with the side mode suppression ratio (SMSR) >35 dB, and high wavelength accuracy with the deviations from the linear fitted values <±0.03 nm for all channels. Furthermore, small signal modulation bandwidth >7 GHz was obtained, which may be suitable for 40 GbE applications in the 1.3-μm band.

  14. A multiscale decomposition approach to detect abnormal vasculature in the optic disc.

    PubMed

    Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter

    2015-07-01

    This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated using 300 images achieving an AUC of 0.93 with maximum accuracy of 88%. Copyright © 2015 Elsevier Ltd. All rights reserved.
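
    The classification stage maps directly onto standard tooling; in this sketch the random feature matrix stands in for the AM-FM, granulometry, and fractal-dimension features, which is the only assumption beyond what the abstract states.

    ```python
    # Linear SVM evaluated by 10-fold cross-validation with AUC scoring.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 40))        # 300 ROIs x 40 texture features
    y = rng.integers(0, 2, size=300)      # 1 = neovascularization present

    auc = cross_val_score(SVC(kernel="linear"), X, y, cv=10, scoring="roc_auc")
    print(f"mean AUC over 10 folds: {auc.mean():.2f}")
    ```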

  15. Determining suitable lego-structures to estimate stability of larger peptide nanostructures using computational methods.

    PubMed

    Beke, Tamás; Czajlik, András; Csizmadia, Imre G; Perczel, András

    2006-02-02

    Nanofibers, nanofilms and nanotubes constructed of one to four strands of oligo-alpha- and oligo-beta-peptides were obtained by using carefully selected building units. Lego-type approaches based on thermoneutral isodesmic reactions can be used to reconstruct the total energies of both linear and tubular periodic nanostructures with acceptable accuracy. Total energies of several different nanostructures were accurately determined with errors typically falling in the subchemical range. Thus, attention will be focused on the description of suitable isodesmic reactions that have enabled the determination of the total energy of polypeptides and therefore offer a very fast, efficient and accurate method to obtain energetic information on large and even very large nanosystems.

  16. Simple and Sensitive Paper-Based Device Coupling Electrochemical Sample Pretreatment and Colorimetric Detection.

    PubMed

    Silva, Thalita G; de Araujo, William R; Muñoz, Rodrigo A A; Richter, Eduardo M; Santana, Mário H P; Coltro, Wendell K T; Paixão, Thiago R L C

    2016-05-17

    We report the development of a simple, portable, low-cost, high-throughput visual colorimetric paper-based analytical device for the detection of procaine in seized cocaine samples. The interference of most common cutting agents found in cocaine samples was verified, and a novel electrochemical approach was used for sample pretreatment in order to increase the selectivity. Under the optimized experimental conditions, a linear analytical curve was obtained for procaine concentrations ranging from 5 to 60 μmol L⁻¹, with a detection limit of 0.9 μmol L⁻¹. The accuracy of the proposed method was evaluated using seized cocaine samples and an addition and recovery protocol.

  17. On real-space Density Functional Theory for non-orthogonal crystal systems: Kronecker product formulation of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Sharma, Abhiraj; Suryanarayana, Phanish

    2018-05-01

    We present an accurate and efficient real-space Density Functional Theory (DFT) framework for the ab initio study of non-orthogonal crystal systems. Specifically, employing a local reformulation of the electrostatics, we develop a novel Kronecker product formulation of the real-space kinetic energy operator that significantly reduces the number of operations associated with the Laplacian-vector multiplication, the dominant cost in practical computations. In particular, we reduce the scaling with respect to finite-difference order from quadratic to linear, thereby significantly bridging the gap in computational cost between non-orthogonal and orthogonal systems. We verify the accuracy and efficiency of the proposed methodology through selected examples.
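
    The computational point is that a Laplacian built from 1D stencils on a tensor grid never needs to be formed explicitly: the matrix-vector product reduces to applying the 1D matrix along each axis. A sketch for an orthogonal grid and a second-order stencil follows; the paper's contribution generalizes this to non-orthogonal cells and higher finite-difference orders.

    ```python
    # Kronecker-product Laplacian: L = D(x)I(x)I + I(x)D(x)I + I(x)I(x)D,
    # applied axis-by-axis instead of assembling the n^3-by-n^3 matrix.
    import numpy as np

    def lap1d(n, h):
        """Second-order 1D finite-difference Laplacian (Dirichlet ends)."""
        return (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2

    n, h = 20, 0.1
    D = lap1d(n, h)
    v = np.random.default_rng(6).normal(size=(n, n, n))

    Lv = (np.einsum("ij,jkl->ikl", D, v)     # D along axis 0
          + np.einsum("ij,kjl->kil", D, v)   # D along axis 1
          + np.einsum("ij,klj->kli", D, v))  # D along axis 2
    print(Lv.shape)                          # (20, 20, 20)
    ```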

  18. Oxygen measurement by multimode diode lasers employing gas correlation spectroscopy.

    PubMed

    Lou, Xiutao; Somesfalean, Gabriel; Chen, Bin; Zhang, Zhiguo

    2009-02-10

    Multimode diode laser (MDL)-based correlation spectroscopy (COSPEC) was used to measure oxygen in ambient air, employing a diode laser (DL) whose emission spectrum overlaps the oxygen absorption lines of the A band. A sensitivity of 700 ppm m was achieved with good accuracy (2%) and linearity (R² = 0.999). For comparison, measurements of ambient oxygen were also performed by the tunable DL absorption spectroscopy (TDLAS) technique employing a vertical cavity surface emitting laser. We demonstrate that, despite slightly degraded sensitivity, the MDL-based COSPEC oxygen sensor has the advantages of high stability, low cost, ease of use, and relaxed requirements in component selection and instrument buildup compared with the TDLAS-based instrument.

  19. Achieving a Linear Dose Rate Response in Pulse-Mode Silicon Photodiode Scintillation Detectors Over a Wide Range of Excitations

    NASA Astrophysics Data System (ADS)

    Carroll, Lewis

    2014-02-01

    We are developing a new dose calibrator for nuclear pharmacies that can measure radioactivity in a vial or syringe without handling it directly or removing it from its transport shield “pig”. The calibrator's detector comprises twin opposing scintillating crystals coupled to Si photodiodes and current-amplifying trans-resistance amplifiers. Such a scheme is inherently linear with respect to dose rate over a wide range of radiation intensities, but accuracy at low activity levels may be impaired, beyond the effects of meager photon statistics, by baseline fluctuation and drift inevitably present in high-gain, current-mode photodiode amplifiers. The work described here is motivated by our desire to enhance accuracy at low excitations while maintaining linearity at high excitations. Thus, we are also evaluating a novel “pulse-mode” analog signal processing scheme that employs a linear threshold discriminator to virtually eliminate baseline fluctuation and drift. We will show the results of a side-by-side comparison of current-mode versus pulse-mode signal processing schemes, including perturbing factors affecting linearity and accuracy at very low and very high excitations. Bench testing over a wide range of excitations is done using a Poisson random pulse generator plus an LED light source to simulate excitations up to ~10⁶ detected counts per second without the need to handle and store large amounts of radioactive material.

  20. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study

    PubMed Central

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with a LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for the augmentation of video applications. One 35 year old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509

  1. Improved Short-Term Clock Prediction Method for Real-Time Positioning.

    PubMed

    Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan

    2017-06-06

    The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for communication delays or data gaps. Unlike orbit corrections, clock corrections are difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks, and the periodic terms obtained through the FFT are adopted in a sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns, so the new model can replace IGS ultra-rapid products in the application of real-time PPP. A positive correlation is also found between the prediction accuracy and the short-term stability of on-board clocks. Compared with the traditional linear model, the accuracy of static PPP in the N, E, and U directions using the new model's 2-h clock predictions is improved by about 50%, and the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time stream, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
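
    The core of the predictor (linear trend, FFT of the residuals, extrapolation with the dominant periodic term) fits in a few lines. The synthetic clock series below, a trend plus a 12 h harmonic, and the single-sinusoid reconstruction are simplifying assumptions; the operational model also manages sliding windows over a live data stream.

    ```python
    # Clock prediction sketch: fit a linear trend, detect the dominant
    # periodic term in the residuals via FFT, extrapolate trend + sinusoid.
    import numpy as np

    t = np.arange(0, 24 * 3600, 30.0)                   # 24 h at 30 s sampling
    rng = np.random.default_rng(7)
    clock = (2e-9 * t + 5e-9 * np.sin(2 * np.pi * t / 43200)
             + 1e-10 * rng.normal(size=t.size))

    p = np.polyfit(t, clock, 1)                         # linear trend
    resid = clock - np.polyval(p, t)

    spec = np.fft.rfft(resid)
    freqs = np.fft.rfftfreq(t.size, d=30.0)
    k = 1 + np.argmax(np.abs(spec[1:]))                 # skip the DC bin
    amp = 2 * np.abs(spec[k]) / t.size
    phase = np.angle(spec[k])

    t_pred = t[-1] + np.arange(30.0, 3 * 3600 + 30.0, 30.0)   # 3 h ahead
    pred = np.polyval(p, t_pred) + amp * np.cos(2 * np.pi * freqs[k] * t_pred + phase)
    print(f"detected period: {1 / freqs[k] / 3600:.1f} h")    # ~12.0 h
    ```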

  2. Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections

    PubMed Central

    Jrad, Nisrine; Grall-Maës, Edith; Beauseroy, Pierre

    2009-01-01

    Supervised learning of microarray data has received much attention in recent years, and multiclass cancer diagnosis based on selected gene profiles is used as an adjunct to clinical diagnosis. However, supervised diagnosis may hinder patient care, add expense or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. State-of-the-art multiclass algorithms can be considered a particular case of the proposed algorithm, where the number of decisions is given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and the class-selective rejection frameworks. Five datasets of selected genes are used to assess the performance of the proposed method. Results are discussed and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers. PMID:19584932

  3. Sex Assessment from the Volume of the First Metatarsal Bone: A Comparison of Linear and Volume Measurements.

    PubMed

    Gibelli, Daniele; Poppa, Pasquale; Cummaudo, Marco; Mattia, Mirko; Cappella, Annalisa; Mazzarelli, Debora; Zago, Matteo; Sforza, Chiarella; Cattaneo, Cristina

    2017-11-01

    Sexual dimorphism is a crucial characteristic of the skeleton. In recent years, volumetric and surface 3D acquisition systems have enabled anthropologists to assess surfaces and volumes, whose potential still needs to be verified. This article aimed at assessing volume and linear parameters of the first metatarsal bone through 3D acquisition by laser scanning. Sixty-eight skeletons were scanned with a laser scanner, and seven linear measurements and the volume of each bone were assessed. A volume cutoff value of 13,370 mm³ was found, with an accuracy of 80.8%. Linear measurements outperformed volume: metatarsal length and mediolateral width of the base showed higher cross-validated accuracies (respectively 82.1% and 79.1%, rising to 83.6% when both were included). Further studies are needed to verify the real advantage for sex assessment provided by volume measurements. © 2017 American Academy of Forensic Sciences.

  4. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin

    1997-06-01

    A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.

  5. Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato.

    PubMed

    Stich, Benjamin; Van Inghelandt, Delphine

    2018-01-01

    Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato modeling additive, dominance, and epistatic effects, (ii) investigate the effects of size and make up of training set, number of test environments and molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. With genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO, four different prediction methods were used for genomic prediction of relative area under disease progress curve after a Phytophthora infestans infection, plant maturity, maturity corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones or subsets thereof genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that are possible to realize in commercial potato breeding programs.
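
    The GBLUP component reduces to a genomic relationship matrix and a ridge-type solve; this sketch uses random dosages in place of the SolCAP genotypes and a fixed variance ratio instead of REML estimation, so it illustrates the mechanics only.

    ```python
    # GBLUP sketch: genomic relationship matrix K = ZZ'/m from centered marker
    # dosages, then BLUP of genetic values u = K (K + lam I)^(-1) (y - mean).
    import numpy as np

    rng = np.random.default_rng(8)
    n, m = 150, 1000
    Z = rng.integers(0, 3, size=(n, m)).astype(float)
    Z -= Z.mean(axis=0)                        # center each marker column

    beta = rng.normal(size=m) / np.sqrt(m)     # simulated marker effects
    g_true = Z @ beta
    y = g_true + rng.normal(scale=0.5, size=n)

    K = Z @ Z.T / m
    lam = 1.0                                  # assumed sigma_e^2 / sigma_g^2
    u_hat = K @ np.linalg.solve(K + lam * np.eye(n), y - y.mean())
    print(f"accuracy cor(u_hat, g_true): {np.corrcoef(u_hat, g_true)[0, 1]:.2f}")
    ```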

  6. Prospects and Potential Uses of Genomic Prediction of Key Performance Traits in Tetraploid Potato

    PubMed Central

    Stich, Benjamin; Van Inghelandt, Delphine

    2018-01-01

    Genomic prediction is a routine tool in breeding programs of most major animal and plant species. However, its usefulness for potato breeding has not yet been evaluated in detail. The objectives of this study were to (i) examine the prospects of genomic prediction of key performance traits in a diversity panel of tetraploid potato modeling additive, dominance, and epistatic effects, (ii) investigate the effects of size and make up of training set, number of test environments and molecular markers on prediction accuracy, and (iii) assess the effect of including markers from candidate genes on the prediction accuracy. With genomic best linear unbiased prediction (GBLUP), BayesA, BayesCπ, and Bayesian LASSO, four different prediction methods were used for genomic prediction of relative area under disease progress curve after a Phytophthora infestans infection, plant maturity, maturity corrected resistance, tuber starch content, tuber starch yield (TSY), and tuber yield (TY) of 184 tetraploid potato clones or subsets thereof genotyped with the SolCAP 8.3k SNP array. The cross-validated prediction accuracies with GBLUP and the three Bayesian approaches for the six evaluated traits ranged from about 0.5 to about 0.8. For traits with a high expected genetic complexity, such as TSY and TY, we observed an 8% higher prediction accuracy using a model with additive and dominance effects compared with a model with additive effects only. Our results suggest that for oligogenic traits in general and when diagnostic markers are available in particular, the use of Bayesian methods for genomic prediction is highly recommended and that the diagnostic markers should be modeled as fixed effects. The evaluation of the relative performance of genomic prediction vs. phenotypic selection indicated that the former is superior, assuming cycle lengths and selection intensities that are possible to realize in commercial potato breeding programs. PMID:29563919

  7. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool for separating systematic and random prediction errors from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application; however, the outlined approach can be used to assess the performance of any computational model.
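
    A sketch of the idea: regressing measured values on model predictions separates systematic error (slope away from 1, nonzero intercept) from random error (residual scatter). The rolling-force numbers are synthetic placeholders.

    ```python
    # Accuracy assessment by linear regression of measured vs predicted values.
    import numpy as np

    rng = np.random.default_rng(9)
    predicted = rng.uniform(800, 1600, size=200)              # model output (kN)
    measured = 0.97 * predicted + 25 + rng.normal(0, 15, 200) # plant data (toy)

    slope, intercept = np.polyfit(predicted, measured, 1)
    resid = measured - (slope * predicted + intercept)
    print(f"slope={slope:.3f}, intercept={intercept:.1f} kN, "
          f"residual sd={resid.std(ddof=2):.1f} kN")
    ```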

  8. A Secondary Antibody-Detecting Molecular Weight Marker with Mouse and Rabbit IgG Fc Linear Epitopes for Western Blot Analysis

    PubMed Central

    Cheng, Ta-Chun; Tung, Yi-Ching; Chu, Pei-Yu; Chuang, Chih-Hung; Hsieh, Yuan-Chin; Huang, Chien-Chiao; Wang, Yeng-Tseng; Kao, Chien-Han; Roffler, Steve R.; Cheng, Tian-Lu

    2016-01-01

    Molecular weight markers that can tolerate denaturing conditions and be auto-detected by secondary antibodies offer great efficacy and convenience for Western Blotting. Here, we describe M&R LE protein markers which contain linear epitopes derived from the heavy chain constant regions of mouse and rabbit immunoglobulin G (IgG Fc LE). These markers can be directly recognized and stained by a wide range of anti-mouse and anti-rabbit secondary antibodies. We selected three mouse (M1, M2 and M3) linear IgG1 and three rabbit (R1, R2 and R3) linear IgG heavy chain epitope candidates based on their respective crystal structures. Western blot analysis indicated that the M2 and R2 linear epitopes are effectively recognized by anti-mouse and anti-rabbit secondary antibodies, respectively. We fused the M2 and R2 epitopes (M&R LE) and incorporated the polypeptide in a range of 15–120 kDa auto-detecting markers (M&R LE protein marker). The M&R LE protein marker can be auto-detected by anti-mouse and anti-rabbit IgG secondary antibodies in standard immunoblots. Linear regression analysis of the M&R LE protein marker, plotted as gel mobility versus the log of the marker molecular weights, revealed good linearity with a correlation coefficient R² value of 0.9965, indicating that the M&R LE protein marker displays high accuracy for determining protein molecular weights. This accurate, regular and auto-detected M&R LE protein marker may provide a simple, efficient and economical tool for protein analysis. PMID:27494183
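
    The calibration implied by the regression analysis is easy to reproduce: fit log10(molecular weight) against gel mobility for the marker bands and invert for unknown bands. The mobility values below are invented for illustration.

    ```python
    # Molecular-weight calibration: log10(MW) is approximately linear in
    # relative gel mobility (Rf) over the marker range.
    import numpy as np

    marker_kda = np.array([15, 25, 35, 55, 70, 100, 120], dtype=float)
    mobility = np.array([0.92, 0.78, 0.66, 0.48, 0.38, 0.24, 0.17])

    slope, intercept = np.polyfit(mobility, np.log10(marker_kda), 1)

    def band_size(rf: float) -> float:
        """Estimate a band's molecular weight (kDa) from its mobility."""
        return 10 ** (slope * rf + intercept)

    print(f"band at Rf 0.55: ~{band_size(0.55):.0f} kDa")
    ```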

  9. A Secondary Antibody-Detecting Molecular Weight Marker with Mouse and Rabbit IgG Fc Linear Epitopes for Western Blot Analysis.

    PubMed

    Lin, Wen-Wei; Chen, I-Ju; Cheng, Ta-Chun; Tung, Yi-Ching; Chu, Pei-Yu; Chuang, Chih-Hung; Hsieh, Yuan-Chin; Huang, Chien-Chiao; Wang, Yeng-Tseng; Kao, Chien-Han; Roffler, Steve R; Cheng, Tian-Lu

    2016-01-01

    Molecular weight markers that can tolerate denaturing conditions and be auto-detected by secondary antibodies offer great efficacy and convenience for Western Blotting. Here, we describe M&R LE protein markers which contain linear epitopes derived from the heavy chain constant regions of mouse and rabbit immunoglobulin G (IgG Fc LE). These markers can be directly recognized and stained by a wide range of anti-mouse and anti-rabbit secondary antibodies. We selected three mouse (M1, M2 and M3) linear IgG1 and three rabbit (R1, R2 and R3) linear IgG heavy chain epitope candidates based on their respective crystal structures. Western blot analysis indicated that the M2 and R2 linear epitopes are effectively recognized by anti-mouse and anti-rabbit secondary antibodies, respectively. We fused the M2 and R2 epitopes (M&R LE) and incorporated the polypeptide in a range of 15-120 kDa auto-detecting markers (M&R LE protein marker). The M&R LE protein marker can be auto-detected by anti-mouse and anti-rabbit IgG secondary antibodies in standard immunoblots. Linear regression analysis of the M&R LE protein marker, plotted as gel mobility versus the log of the marker molecular weights, revealed good linearity with a correlation coefficient R² value of 0.9965, indicating that the M&R LE protein marker displays high accuracy for determining protein molecular weights. This accurate, regular and auto-detected M&R LE protein marker may provide a simple, efficient and economical tool for protein analysis.

  10. Determination of a novel nonfluorinated quinolone, nemonoxacin, in human feces and its glucuronide conjugate in human urine and feces by high-performance liquid chromatography-triple quadrupole mass spectrometry.

    PubMed

    He, Gaoli; Guo, Beining; Yu, Jicheng; Zhang, Jing; Wu, Xiaojie; Cao, Guoying; Shi, Yaoguo; Tsai, Cheng-Yuan

    2015-05-01

    Three methods were developed and validated for the determination of nemonoxacin in human feces and of its major metabolite, nemonoxacin acyl-β-D-glucuronide, in human urine and feces. Nemonoxacin was extracted from feces homogenate samples by liquid-liquid extraction, and nemonoxacin acyl-β-D-glucuronide by a solid-phase extraction procedure used for the pretreatment of both urine and feces homogenate samples. Separation was performed on a C18 reversed-phase column under isocratic elution with a mobile phase consisting of acetonitrile and 0.1% formic acid. Both analytes were determined by liquid chromatography-tandem mass spectrometry with positive electrospray ionization in selected reaction monitoring mode, with gatifloxacin as the internal standard. The lower limit of quantitation (LLOQ) of nemonoxacin in feces was 0.12 µg/g, and the calibration curve was linear over the concentration range of 0.12-48.00 µg/g. The LLOQ of the metabolite was 0.0010 µg/mL in urine and 0.03 µg/g in feces, with linear ranges of 0.0010-0.2000 µg/mL and 0.03-3.00 µg/g, respectively. Validation covered selectivity, accuracy, precision, linearity, recovery, matrix effect, carryover, dilution integrity and stability, indicating that the methods can quantify the corresponding analytes with excellent reliability. The validated methods were successfully applied to an absolute bioavailability clinical study of nemonoxacin malate capsules. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    PubMed Central

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have recently been developed and used in genomic selection in plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels computed from markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability to single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicates that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
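
    The covariance structure of the first model is literally a Kronecker product, as the sketch below shows with toy matrices; the environment correlations and marker data are placeholders, and a GBLUP-style kernel stands in for either of the paper's two kernels.

    ```python
    # G x E genetic covariance as the Kronecker product of an environment
    # correlation matrix E and a genomic kernel K: cov(u) = E (x) K.
    import numpy as np

    rng = np.random.default_rng(10)
    n_env, n_ind, m = 3, 50, 400
    Z = rng.integers(0, 3, size=(n_ind, m)).astype(float)
    Z -= Z.mean(axis=0)
    K = Z @ Z.T / m                         # genomic kernel (GBLUP-style)

    E = np.array([[1.0, 0.6, 0.3],          # assumed genetic correlations
                  [0.6, 1.0, 0.5],          # between three environments
                  [0.3, 0.5, 1.0]])

    cov_u = np.kron(E, K)                   # covariance of stacked u vectors
    print(cov_u.shape)                      # (150, 150)
    ```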

  12. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo

    2017-01-05

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have recently been developed and used in genomic selection in plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels computed from markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability to single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicates that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.

  13. A New Rapid and Sensitive Stability-Indicating UPLC Assay Method for Tolterodine Tartrate: Application in Pharmaceuticals, Human Plasma and Urine Samples.

    PubMed

    Yanamandra, Ramesh; Vadla, Chandra Sekhar; Puppala, Umamaheshwar; Patro, Balaram; Murthy, Yellajyosula L N; Ramaiah, Parimi Atchuta

    2012-01-01

    A new rapid, simple, sensitive, selective and accurate reversed-phase stability-indicating Ultra Performance Liquid Chromatography (RP-UPLC) technique was developed for the assay of Tolterodine Tartrate in pharmaceutical dosage form, human plasma and urine samples. The developed UPLC method is superior in technology to conventional HPLC with respect to speed, solvent consumption, resolution and cost of analysis. Chromatographic run time was 6 min in reversed-phase mode and ultraviolet detection was carried out at 220 nm for quantification. Efficient separation was achieved for all the degradants of Tolterodine Tartrate on a BEH C18 sub-2-μm Acquity UPLC column using trifluoroacetic acid and acetonitrile as the organic solvent in a linear gradient program. The active pharmaceutical ingredient was extracted from tablet dosage form using a mixture of acetonitrile and water as diluent. The calibration graphs were linear and the method showed excellent recoveries for bulk and tablet dosage form. The test solution was found to be stable for 40 days when stored in the refrigerator between 2 and 8 °C. The developed UPLC method was validated and meets the requirements delineated by the International Conference on Harmonization (ICH) guidelines with respect to linearity, accuracy, precision, specificity and robustness. The intra-day and inter-day variation was found to be less than 1%. The method was reproducible and selective for the estimation of Tolterodine Tartrate. Because the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one.

  14. A New Rapid and Sensitive Stability-Indicating UPLC Assay Method for Tolterodine Tartrate: Application in Pharmaceuticals, Human Plasma and Urine Samples

    PubMed Central

    Yanamandra, Ramesh; Vadla, Chandra Sekhar; Puppala, Umamaheshwar; Patro, Balaram; Murthy, Yellajyosula. L. N.; Ramaiah, Parimi Atchuta

    2012-01-01

    A new rapid, simple, sensitive, selective and accurate reversed-phase stability-indicating Ultra Performance Liquid Chromatography (RP-UPLC) technique was developed for the assay of Tolterodine Tartrate in pharmaceutical dosage form, human plasma and urine samples. The developed UPLC method is superior in technology to conventional HPLC with respect to speed, solvent consumption, resolution and cost of analysis. Chromatographic run time was 6 min in reversed-phase mode and ultraviolet detection was carried out at 220 nm for quantification. Efficient separation was achieved for all the degradants of Tolterodine Tartrate on a BEH C18 sub-2-μm Acquity UPLC column using trifluoroacetic acid and acetonitrile as the organic solvent in a linear gradient program. The active pharmaceutical ingredient was extracted from tablet dosage form using a mixture of acetonitrile and water as diluent. The calibration graphs were linear and the method showed excellent recoveries for bulk and tablet dosage form. The test solution was found to be stable for 40 days when stored in the refrigerator between 2 and 8 °C. The developed UPLC method was validated and meets the requirements delineated by the International Conference on Harmonization (ICH) guidelines with respect to linearity, accuracy, precision, specificity and robustness. The intra-day and inter-day variation was found to be less than 1%. The method was reproducible and selective for the estimation of Tolterodine Tartrate. Because the method could effectively separate the drug from its degradation products, it can be employed as a stability-indicating one. PMID:22396907

  15. A Hybrid Brain-Computer Interface Based on the Fusion of P300 and SSVEP Scores.

    PubMed

    Yin, Erwei; Zeyl, Timothy; Saab, Rami; Chau, Tom; Hu, Dewen; Zhou, Zongtan

    2015-07-01

    The present study proposes a hybrid brain-computer interface (BCI) with 64 selectable items based on the fusion of P300 and steady-state visually evoked potential (SSVEP) brain signals. With this approach, row/column (RC) P300 and two-step SSVEP paradigms were integrated to create two hybrid paradigms, which we denote as the double RC (DRC) and 4-D spellers. In each hybrid paradigm, the target is simultaneously detected based on both P300 and SSVEP potentials as measured by the electroencephalogram. We further proposed a maximum-probability estimation (MPE) fusion approach to combine the P300 and SSVEP on a score level and compared this approach to other approaches based on linear discriminant analysis, a naïve Bayes classifier, and support vector machines. The experimental results obtained from thirteen participants indicated that the 4-D hybrid paradigm outperformed the DRC paradigm and that the MPE fusion achieved higher accuracy compared with the other approaches. Importantly, 12 of the 13 participants using the 4-D paradigm achieved an accuracy of over 90%, and the average accuracy was 95.18%. These promising results suggest that the proposed hybrid BCI system could be used in the design of a high-performance BCI-based keyboard.
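
    The abstract does not spell out the MPE fusion formula, so the following is only a generic score-level fusion sketch under an independence assumption: each classifier's scores are mapped to pseudo-probabilities and the fused target maximizes the joint probability. The function and variable names are hypothetical.

    ```python
    import numpy as np

    def fuse_scores(p300_scores, ssvep_scores):
        """Score-level fusion over candidate targets.

        Both inputs are 1-D arrays of classifier scores, one per selectable
        item. Scores are converted to pseudo-probabilities with a softmax
        and combined under an independence assumption; the fused target is
        the item with the maximum joint probability.
        """
        def softmax(s):
            e = np.exp(s - s.max())
            return e / e.sum()

        p = softmax(np.asarray(p300_scores, dtype=float))
        q = softmax(np.asarray(ssvep_scores, dtype=float))
        joint = p * q                          # independence assumption
        return int(np.argmax(joint)), joint / joint.sum()

    target, posterior = fuse_scores([0.1, 2.3, 0.5], [1.0, 1.8, 0.2])
    print(target, posterior.round(3))          # fused choice and posterior
    ```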

  16. Diagnostic potential of Raman spectroscopy in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Wong Kee Song, Louis-Michel; Molckovsky, Andrea; Wang, Kenneth K.; Burgart, Lawrence J.; Dolenko, Brion; Somorjai, Rajmund L.; Wilson, Brian C.

    2005-04-01

    Patients with Barrett's esophagus (BE) undergo periodic endoscopic surveillance with random biopsies in an effort to detect dysplastic or early cancerous lesions. Surveillance may be enhanced by near-infrared Raman spectroscopy (NIRS), which has the potential to identify endoscopically-occult dysplastic lesions within the Barrett's segment and allow for targeted biopsies. The aim of this study was to assess the diagnostic performance of NIRS for identifying dysplastic lesions in BE in vivo. Raman spectra (Pexc=70 mW; t=5 s) were collected from Barrett's mucosa at endoscopy using a custom-built NIRS system (λexc=785 nm) equipped with a filtered fiber-optic probe. Each probed site was biopsied for matching histological diagnosis as assessed by an expert pathologist. Diagnostic algorithms were developed using genetic algorithm-based feature selection and linear discriminant analysis, and classification was performed on all spectra with a bootstrap-based cross-validation scheme. The analysis comprised 192 samples (112 non-dysplastic, 54 low-grade dysplasia and 26 high-grade dysplasia/early adenocarcinoma) from 65 patients. Compared with histology, NIRS differentiated dysplastic from non-dysplastic Barrett's samples with 86% sensitivity, 88% specificity and 87% accuracy. NIRS identified 'high-risk' lesions (high-grade dysplasia/early adenocarcinoma) with 88% sensitivity, 89% specificity and 89% accuracy. In the present study, NIRS classified Barrett's epithelia with high and clinically-useful diagnostic accuracy.
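
    A hedged sketch of the classification stage described above, assuming linear discriminant analysis with bootstrap-based cross-validation on stand-in spectra; the genetic-algorithm feature selection step is omitted, and the data shapes and number of replicates are invented.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.utils import resample

    # Stand-in data: rows are spectra (192 samples as in the abstract),
    # labels 0 = non-dysplastic, 1 = dysplastic. Features are random here.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(192, 20))
    y = rng.integers(0, 2, size=192)

    # Bootstrap-based cross-validation: train on a bootstrap sample, test
    # on the out-of-bag spectra, and average accuracy over replicates.
    accs = []
    for b in range(200):
        idx = resample(np.arange(len(y)), random_state=b)
        oob = np.setdiff1d(np.arange(len(y)), idx)
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        accs.append(clf.score(X[oob], y[oob]))
    print(f"out-of-bag accuracy: {np.mean(accs):.2f}")
    ```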

  17. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    NASA Astrophysics Data System (ADS)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
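
    A quick Monte Carlo check of the averaging argument, assuming the N spot profiles yield independent centroid estimates with equal variance; the standard deviation of the mean then falls as 1/√N, matching the quoted √N accuracy gain. All numbers are arbitrary.

    ```python
    import numpy as np

    # Averaging N uncorrelated speckle-perturbed centroid estimates
    # shrinks the centroid error by roughly sqrt(N).
    rng = np.random.default_rng(42)
    true_pos, sigma = 0.0, 1.0        # arbitrary units

    for N in (1, 4, 16, 64):
        # Each of 10000 trials averages N independent centroid estimates.
        est = rng.normal(true_pos, sigma, size=(10000, N)).mean(axis=1)
        print(f"N={N:3d}  std={est.std():.3f}"
              f"  sigma/sqrt(N)={sigma / np.sqrt(N):.3f}")
    ```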

  18. Potential and limits to unravel the genetic architecture and predict the variation of Fusarium head blight resistance in European winter wheat (Triticum aestivum L.).

    PubMed

    Jiang, Y; Zhao, Y; Rodemann, B; Plieske, J; Kollers, S; Korzun, V; Ebmeyer, E; Argillier, O; Hinze, M; Ling, J; Röder, M S; Ganal, M W; Mette, M F; Reif, J C

    2015-03-01

    Genome-wide mapping approaches in diverse populations are powerful tools to unravel the genetic architecture of complex traits. The main goals of our study were to investigate the potential and limits to unravel the genetic architecture and to identify the factors determining the accuracy of prediction of the genotypic variation of Fusarium head blight (FHB) resistance in wheat (Triticum aestivum L.) based on data collected with a diverse panel of 372 European varieties. The wheat lines were phenotyped in multi-location field trials for FHB resistance and genotyped with 782 simple sequence repeat (SSR) markers, and 9k and 90k single-nucleotide polymorphism (SNP) arrays. We applied genome-wide association mapping in combination with fivefold cross-validations and observed surprisingly high accuracies of prediction for marker-assisted selection based on the detected quantitative trait loci (QTLs). Using a random sample of markers not selected for marker-trait associations revealed only a slight decrease in prediction accuracy compared with marker-based selection exploiting the QTL information. The same picture was confirmed in a simulation study, suggesting that relatedness is a main driver of the accuracy of prediction in marker-assisted selection of FHB resistance. When the accuracy of prediction of three genomic selection models was contrasted for the three marker data sets, no significant differences in accuracies among marker platforms and genomic selection models were observed. Marker density impacted the accuracy of prediction only marginally. Consequently, genomic selection of FHB resistance can be implemented most cost-efficiently based on low- to medium-density SNP arrays.

  19. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    PubMed

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI)) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' into the aforementioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBV and estimated breeding values (EBVs) from different countries) were combined into an overall index when altering accuracies of EBVs and correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs.
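
    A generic selection index sketch consistent with the theory applied above: index weights b = P⁻¹g and accuracy r(TI) = √(b'g/σ²A). The two information sources (own performance and a GEBV of accuracy r(mg) = 0.5) and the h² = 0.15 trait follow the abstract, but the parameterization here is illustrative and may differ from the study's.

    ```python
    import numpy as np

    def index_accuracy(P, g, sigma2_A):
        """Accuracy r_TI of a selection index.

        P        : variance-covariance matrix of the information sources
        g        : covariances between the sources and the true breeding value
        sigma2_A : additive-genetic variance of the trait
        Index weights follow b = P^{-1} g and r_TI = sqrt(b'g / sigma2_A).
        """
        b = np.linalg.solve(P, g)
        return np.sqrt(b @ g / sigma2_A)

    # Illustrative numbers (h2 = 0.15, phenotypic variance 1.0, GEBV
    # accuracy r_mg = 0.5); not the study's exact setup.
    h2, s2p, r_mg = 0.15, 1.0, 0.5
    s2a = h2 * s2p

    # Source 1: own performance (var = s2p, cov with BV = s2a).
    # Source 2: GEBV with var = cov = r_mg**2 * s2a.
    P = np.array([[s2p,            r_mg**2 * s2a],
                  [r_mg**2 * s2a,  r_mg**2 * s2a]])
    g = np.array([s2a, r_mg**2 * s2a])
    print(f"r_TI = {index_accuracy(P, g, s2a):.2f}")   # ~0.58 here
    ```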

  20. Development and application of UHPLC-MS/MS method for the determination of phenolic compounds in Chamomile flowers and Chamomile tea extracts.

    PubMed

    Nováková, Lucie; Vildová, Anna; Mateus, Joana Patricia; Gonçalves, Tiago; Solich, Petr

    2010-09-15

    A UHPLC-MS/MS method using a BEH C18 analytical column was developed for the separation and quantitation of 12 phenolic compounds of Chamomile (Matricaria recutita L.). The separation was accomplished using gradient elution with a mobile phase consisting of methanol and formic acid 0.1%. ESI in both positive and negative ion mode was optimized with the aim of reaching high sensitivity and selectivity for quantitation using SRM experiments. ESI in negative ion mode was found to be more convenient for quantitative analysis of all phenolics except for chlorogenic acid and kaempferol, which demonstrated better linearity, accuracy and precision in ESI positive ion mode. The results of method validation confirmed that the developed UHPLC-MS/MS method was convenient and reliable for the determination of phenolic compounds in Chamomile extracts, with linearity >0.9982, accuracy within 76.7-126.7% and precision within 2.2-12.7% at three spiked concentration levels. Method sensitivity expressed as LOQ was typically 5-20 nmol/l. Extracts of Chamomile flowers and Chamomile tea were subjected to UHPLC-MS/MS analysis. The most abundant phenolic compounds in both Chamomile flowers and Chamomile tea extracts were chlorogenic acid, umbelliferone, apigenin and apigenin-7-glucoside. In Chamomile tea extracts there was a greater abundance of flavonoid glycosides such as rutin or quercitrin, while the aglycone apigenin and its glycoside were present in lower amounts. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  1. A UPLC-MS/MS method for simultaneous determination of five flavonoids from Stellera chamaejasme L. in rat plasma and its application to a pharmacokinetic study.

    PubMed

    Li, Yun-Qing; Li, Cheng-Jian; Lv, Lei; Cao, Qing-Qing; Qian, Xian; Li, Si Wei; Wang, Hui; Zhao, Liang

    2018-06-01

    Stellera chamaejasme L. has been used as a traditional Chinese medicine for the treatment of scabies, tinea, stubborn skin ulcers, chronic tracheitis, cancer and tuberculosis. A sensitive and selective ultra-high performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was developed and validated for the simultaneous determination of five flavonoids (stelleranol, chamaechromone, neochamaejasmin A, chamaejasmine and isochamaejasmin) of S. chamaejasme L. in rat plasma. Chromatographic separation was accomplished on an Agilent Poroshell 120 EC-C18 column (2.1 × 100 mm, 2.7 μm) with gradient elution at a flow rate of 0.4 mL/min and the total analysis time was 7 min. The analytes were detected using multiple reaction monitoring in positive ionization mode. The samples were prepared by liquid-liquid extraction with ethyl acetate. The UPLC-MS/MS method was validated for specificity, linearity, sensitivity, accuracy and precision, recovery, matrix effect and stability. The validated method exhibited good linearity (r ≥ 0.9956), and the lower limits of quantification ranged from 0.51 to 0.64 ng/mL for the five flavonoids. The intra- and inter-day precision were both <10.2%, and the accuracy ranged from -11.79 to 9.21%. This method was successfully applied to a pharmacokinetic study of five flavonoids in rats after oral administration of ethyl acetate extract of S. chamaejasme L. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Development and validation of sensitive LC/MS/MS method for quantitative bioanalysis of levonorgestrel in rat plasma and application to pharmacokinetics study.

    PubMed

    Ananthula, Suryatheja; Janagam, Dileep R; Jamalapuram, Seshulatha; Johnson, James R; Mandrell, Timothy D; Lowe, Tao L

    2015-10-15

    A rapid, sensitive, selective and accurate LC/MS/MS method was developed for quantitative determination of levonorgestrel (LNG) in rat plasma and further validated for specificity, linearity, accuracy, precision, sensitivity, matrix effect, recovery efficiency and stability. A liquid-liquid extraction procedure using a hexane:ethyl acetate mixture at 80:20 v:v ratio was employed to efficiently extract LNG from rat plasma. A reversed-phase Luna C18(2) column (50 × 2.0 mm i.d., 3 μm) installed on an AB SCIEX Triple Quad™ 4500 LC/MS/MS system was used to perform the chromatographic separation. LNG was identified within 2 min with high specificity. A linear calibration curve was obtained over the 0.5-50 ng·mL(-1) concentration range. The developed method was validated for intra-day and inter-day accuracy and precision, whose values fell within the acceptable limits. The matrix effect was found to be minimal. Recovery efficiency at three quality control (QC) concentrations, 0.5 (low), 5 (medium) and 50 (high) ng·mL(-1), was found to be >90%. Stability of LNG at various stages of the experiment, including storage, extraction and analysis, was evaluated using QC samples, and the results showed that LNG was stable under all conditions. This validated method was successfully used to study the pharmacokinetics of LNG in rats after SubQ injection, demonstrating its applicability in relevant preclinical studies. Copyright © 2015 Elsevier B.V. All rights reserved.
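
    A minimal sketch of the calibration-and-recovery arithmetic behind such validations, using invented peak areas over the abstract's 0.5-50 ng/mL range; real bioanalytical work would typically use weighted regression and replicate QC samples.

    ```python
    import numpy as np

    # Hypothetical calibration standards (ng/mL) and peak-area ratios.
    conc = np.array([0.5, 1, 2, 5, 10, 25, 50])
    area = np.array([0.021, 0.040, 0.083, 0.20, 0.41, 1.01, 2.02])

    # Plain unweighted least-squares line for the calibration curve.
    slope, intercept = np.polyfit(conc, area, 1)
    r = np.corrcoef(conc, area)[0, 1]
    print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.4f}")

    # Back-calculate a QC sample and express recovery against nominal.
    qc_nominal, qc_area = 5.0, 0.198
    qc_found = (qc_area - intercept) / slope
    print(f"QC recovery: {100 * qc_found / qc_nominal:.1f} %")
    ```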

  3. Optimization and validation of liquid chromatography and headspace-gas chromatography based methods for the quantitative determination of capsaicinoids, salicylic acid, glycol monosalicylate, methyl salicylate, ethyl salicylate, camphor and l-menthol in a topical formulation.

    PubMed

    Pauwels, Jochen; D'Autry, Ward; Van den Bossche, Larissa; Dewever, Cédric; Forier, Michel; Vandenwaeyenberg, Stephanie; Wolfs, Kris; Hoogmartens, Jos; Van Schepdael, Ann; Adams, Erwin

    2012-02-23

    Capsaicinoids, salicylic acid, methyl and ethyl salicylate, glycol monosalicylate, camphor and l-menthol are widely used in topical formulations to relieve local pain. For each separate compound or simple mixtures, quantitative analysis methods are reported. However, for a mixture containing all the above-mentioned active compounds, no assay methods were found. Due to the differing physicochemical characteristics, two methods were developed and optimized simultaneously. The non-volatile capsaicinoids, salicylic acid and glycol monosalicylate were analyzed with liquid chromatography following liquid-liquid extraction, whereas the volatile compounds were analyzed with static headspace-gas chromatography. For the latter method, liquid paraffin was selected as a compatible dilution solvent. The optimized methods were validated in terms of specificity, linearity, accuracy and precision in a range of 80% to 120% of the expected concentrations. For both methods, peaks were well separated without interference of other compounds. Linear relationships were demonstrated with R² values higher than 0.996 for all compounds. Accuracy was assessed by performing replicate recovery experiments with spiked blank samples. Mean recovery values were all between 98% and 102%. Precision was checked at three levels: system repeatability, method precision and intermediate precision. Both methods were found to be acceptably precise at all three levels. Finally, the method was successfully applied to the analysis of some real samples (cutaneous sticks). Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Determination of low-level acrylamide in drinking water by liquid chromatography/tandem mass spectrometry.

    PubMed

    Lucentini, Luca; Ferretti, Emanuele; Veschetti, Enrico; Achene, Laura; Turrio-Baldassarri, Luigi; Ottaviani, Massimo; Bogialli, Sara

    2009-01-01

    A simple and sensitive liquid chromatographic-tandem mass spectrometric (LC/MS/MS) method has been developed and validated to confirm and quantify acrylamide monomer (AA) in drinking water using [13C3] acrylamide as internal standard (IS). After preconcentration by solid-phase extraction with spherical activated carbon, analytes were chromatographed on an IonPac ICE-AS1 column (9 x 250 mm) under isocratic conditions using acetonitrile-water-0.1 M formic acid (43 + 52 + 5, v/v/v) as the mobile phase. Analysis was achieved using a triple-quadrupole mass analyzer equipped with a turbo ion spray interface. For confirmation and quantification of the analytes, MS data acquisition was performed in the multireaction monitoring mode, selecting two precursor-to-product ion transitions for both AA and IS. The method was validated for linearity, sensitivity, accuracy, precision, extraction efficiency, and matrix effect. Linearity in tap water was observed over the concentration range 0.1-2.0 microg/L. Limits of detection and quantification were 0.02 and 0.1 microg/L, respectively. Interday and intraday assays were performed across 3 validation levels (0.1, 0.5, and 1.5 microg/L). Accuracy (as mean recovery) ranged from 89.3 to 96.2% with relative standard deviation <7.98%. Performance characteristics of this LC/MS/MS method make it suitable for regulatory confirmatory analysis of AA in drinking water in compliance with European Union and U.S. Environmental Protection Agency standards.

  5. The identification of high potential archers based on relative psychological coping skills variables: A Support Vector Machine approach

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, A. P. P. Abdul; Razali Abdullah, Mohamad; Aizzat Zakaria, Muhammad; Muaz Alim, Muhammad; Arif Mat Jizat, Jessnor; Fauzi Ibrahim, Mohamad

    2018-03-01

    Support Vector Machine (SVM) has been revealed to be a powerful learning algorithm for classification and prediction. However, the use of SVM for prediction and classification in sport is at its inception. The present study classified and predicted high and low potential archers from a collection of psychological coping skills variables trained on different SVMs. Fifty youth archers (mean age ± standard deviation of 17.0 ± 0.56 years), gathered from various archery programmes, completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear and fine radial basis function (RBF) kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy and precision throughout the exercise, with an accuracy of 92% and a considerably lower error rate for the prediction of the HPPA and the LPPA as compared to the fine RBF SVM. The findings of this investigation can be valuable to coaches and sports managers to recognise high potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
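
    A compact sketch of the two-step scheme (k-means clustering to define HPPA/LPPA labels, then linear and RBF SVMs to predict them), using random stand-in data; the number of scales and the cross-validation setup are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Stand-in psychological coping-skill scores for 50 archers (5 scales).
    rng = np.random.default_rng(7)
    X = rng.normal(size=(50, 5))

    # Step 1: k-means splits the archers into two groups (HPPA vs LPPA).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Step 2: linear and RBF SVMs are trained to predict cluster membership.
    for kernel in ("linear", "rbf"):
        acc = cross_val_score(SVC(kernel=kernel), X, labels, cv=5).mean()
        print(f"{kernel:6s} SVM cross-validated accuracy: {acc:.2f}")
    ```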

  6. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain because, in space object surveillance, they appear as non-resolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly non-linear character of inverting satellite attitude from photometric data, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the method compared with UKF alone and mitigates the particle degradation and dilution of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by a simulation experiment and a scaling experiment. The results show that the proposed method can effectively solve the problems of particle degradation and depletion in PF-based attitude inversion, as well as the unsuitability of UKF for strongly non-linear attitude inversion. Moreover, the inversion accuracy is clearly superior to that of UKF and PF; even in the case of inversion with large attitude error, the method can invert the attitude with few particles and high precision.

  7. Application of texture analysis method for classification of benign and malignant thyroid nodules in ultrasound images.

    PubMed

    Abbasian Ardakani, Ali; Gharbali, Akbar; Mohammadi, Afshin

    2015-01-01

    The aim of this study was to evaluate a computer aided diagnosis (CAD) system with texture analysis (TA) to improve radiologists' accuracy in identification of thyroid nodules as malignant or benign. A total of 70 cases (26 benign and 44 malignant) were analyzed in this study. We extracted up to 270 statistical texture features as a descriptor for each selected region of interest (ROI) under three normalization schemes (default, 3σ and 1%-99%). The features were then reduced to the 10 best and most effective ones using the lowest probability of classification error and average correlation coefficients (POE+ACC) and the Fisher coefficient (Fisher). These features were analyzed under standard and nonstandard states. For TA of the thyroid nodules, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA) were applied. A first nearest-neighbour (1-NN) classifier was applied to the features resulting from PCA and LDA. NDA features were classified by an artificial neural network (A-NN). Receiver operating characteristic (ROC) curve analysis was used for examining the performance of the TA methods. The best results were obtained with 1%-99% normalization and features extracted by the POE+ACC algorithm and analyzed by NDA, with an area under the ROC curve (Az) of 0.9722, corresponding to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Our results indicate that TA is a reliable method that can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.

  8. Prediction of 5-year overall survival in cervical cancer patients treated with radical hysterectomy using computational intelligence methods.

    PubMed

    Obrzut, Bogdan; Kusy, Maciej; Semczuk, Andrzej; Obrzut, Marzanna; Kluska, Jacek

    2017-12-12

    Computational intelligence methods, including non-linear classification algorithms, can be used in medical research and practice as a decision making tool. This study aimed to evaluate the usefulness of artificial intelligence models for 5-year overall survival prediction in patients with cervical cancer treated by radical hysterectomy. The data set was collected from 102 patients with cervical cancer FIGO stage IA2-IIB who underwent primary surgical treatment. Twenty-three demographic and tumor-related parameters, together with selected perioperative data, were collected for each patient. The simulations involved six computational intelligence methods: the probabilistic neural network (PNN), multilayer perceptron network, gene expression programming classifier, support vector machines algorithm, radial basis function neural network and k-Means algorithm. The prediction ability of the models was determined based on the accuracy, sensitivity, specificity, as well as the area under the receiver operating characteristic curve. The results of the computational intelligence methods were compared with the results of linear regression analysis as a reference model. The best results were obtained by the PNN model. This neural network provided very high prediction ability with an accuracy of 0.892 and sensitivity of 0.975. The area under the receiver operating characteristics curve of PNN was also high, 0.818. The outcomes obtained by other classifiers were markedly worse. The PNN model is an effective tool for predicting 5-year overall survival in cervical cancer patients treated with radical hysterectomy.

  9. Concurrent determination of olanzapine, risperidone and 9-hydroxyrisperidone in human plasma by ultra performance liquid chromatography with diode array detection method: application to pharmacokinetic study.

    PubMed

    Siva Selva Kumar, M; Ramanathan, M

    2016-02-01

    A simple and sensitive ultra-performance liquid chromatography (UPLC) method has been developed and validated for simultaneous estimation of olanzapine (OLZ), risperidone (RIS) and 9-hydroxyrisperidone (9-OHRIS) in human plasma in vitro. The sample preparation was performed by a simple liquid-liquid extraction technique. The analytes were chromatographed on a Waters Acquity H class UPLC system using isocratic mobile phase conditions at a flow rate of 0.3 mL/min and an Acquity UPLC BEH shield RP18 column maintained at 40°C. Quantification was performed on a photodiode array detector set at 277 nm and clozapine was used as internal standard (IS). OLZ, RIS, 9-OHRIS and IS retention times were found to be 0.9, 1.4, 1.8 and 3.1 min, respectively, and the total run time was 4 min. The method was validated for selectivity, specificity, recovery, linearity, accuracy, precision and sample stability. The calibration curve was linear over the concentration range 1-100 ng/mL for OLZ, RIS and 9-OHRIS. Intra- and inter-day precisions for OLZ, RIS and 9-OHRIS were found to be good with the coefficient of variation <6.96%, and the accuracy ranging from 97.55 to 105.41%, in human plasma. The validated UPLC method was successfully applied to the pharmacokinetic study of RIS and 9-OHRIS in human plasma. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Fast and parallel determination of PCB 77 and PCB 180 in plasma using ultra performance liquid chromatography with diode array detection: A pharmacokinetic study in Swiss albino mouse.

    PubMed

    Ramanujam, N; Sivaselvakumar, M; Ramalingam, S

    2017-11-01

    A simple, sensitive and reproducible ultra-performance liquid chromatography (UPLC) method has been developed and validated for simultaneous estimation of polychlorinated biphenyl (PCB) 77 and PCB 180 in mouse plasma. The sample preparation was performed by simple liquid-liquid extraction technique. The analytes were chromatographed on a Waters Acquity H class UPLC system using isocratic mobile phase conditions at a flow rate of 0.3 mL/min and Acquity UPLC BEH shield RP 18 column maintained at 35°C. Quantification was performed on a photodiode array detector set at 215 nm and PCB 101 was used as internal standard (IS). PCB 77, PCB 180, and IS retention times were 2.6, 4.7 and 2.8 min, respectively, and the total run time was 6 min. The method was validated for specificity, selectivity, recovery, linearity, accuracy, precision and sample stability. The calibration curve was linear over the concentration range 10-3000 ng/mL for PCB 77 and PCB 180. Intra- and inter-day precisions for PCBs 77 and 180 were found to be good with CV <4.64%, and the accuracy ranged from 98.90 to 102.33% in mouse plasma. The validated UPLC method was successfully applied to the pharmacokinetic study of PCBs 77 and 180 in mouse plasma. Copyright © 2017 John Wiley & Sons, Ltd.

  11. DETERMINATION OF AZOXYSTROBIN AND DIFENOCONAZOLE IN PESTICIDE PRODUCTS.

    PubMed

    Lazić, S; Šunjka, D

    2015-01-01

    In this study a high performance liquid chromatographic (HPLC-DAD) procedure has been developed for the simultaneous determination of azoxystrobin and difenoconazole in suspension concentrate pesticide formulations, with the aim of product quality control. Azoxystrobin, a strobilurin fungicide, and difenoconazole (cis,trans-3-chloro-4-[4-methyl-2-(1H-1,2,4-triazol-1-ylmethyl)-1,3-dioxolan-2-yl]phenyl 4-chlorophenyl ether), a triazole fungicide, are used for the protection of plants from a wide spectrum of fungal diseases. For the analysis, an Agilent Technologies 1100 Series LC system was used. Good separation was achieved on a Zorbax SB-C18 column (5 μm, 250 mm x 3 mm internal diameter) using a mobile phase consisting of acetonitrile/ultrapure water (90:10, v/v), at a flow rate of 0.9 ml/minute and UV detection at 218 nm. Column temperature was 25 °C and the injected volume was 1 μl. Retention times for azoxystrobin and difenoconazole were 2.504 min and 1.963 min, respectively. This method was validated according to the requirements for new methods, which include linearity, precision, accuracy and selectivity. The method demonstrates good linearity with r² > 0.997. The repeatability of the method, expressed as relative standard deviation (RSD, %), was found to be 1.9% for azoxystrobin and 0.5% for difenoconazole. The precision of the method was also considered to be acceptable, as the experimental repeatability relative standard deviation (RSD) was lower than the RSD calculated using the Horwitz equation of 1.7% and 1.4% for azoxystrobin and difenoconazole, respectively. The accuracy of the proposed method was determined from recovery experiments through a standard addition procedure. The average recoveries of the three fortification levels were 101.9% for azoxystrobin and 103.2% for difenoconazole, with RSDs of 1.1% and 1.2%. The method described in this paper is simple, precise, accurate and selective, and represents a new and reliable way of simultaneous determination of azoxystrobin and difenoconazole in formulated products.
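
    For reference, the Horwitz relation used above to benchmark repeatability can be computed directly. This sketch uses the standard form RSD = 2^(1 − 0.5·log₁₀C) with C as a dimensionless mass fraction; the example concentration is hypothetical, and the paper may apply a variant (e.g. an intra-laboratory correction factor).

    ```python
    import math

    def horwitz_rsd(c_mass_fraction):
        """Predicted reproducibility RSD (%) from the Horwitz equation,
        RSD = 2**(1 - 0.5*log10(C)), with C a dimensionless mass fraction
        (e.g. 0.25 for an active-ingredient content of 25 % w/w)."""
        return 2 ** (1 - 0.5 * math.log10(c_mass_fraction))

    # Example: a hypothetical active-ingredient mass fraction of 0.25.
    print(f"Horwitz RSD: {horwitz_rsd(0.25):.1f} %")   # about 2.5 %
    ```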

  12. Development and validation of a selective, sensitive and stability indicating UPLC-MS/MS method for rapid, simultaneous determination of six process related impurities in darunavir drug substance.

    PubMed

    A, Vijaya Bhaskar Reddy; Yusop, Zulkifli; Jaafar, Jafariah; Aris, Azmi B; Majid, Zaiton A; Umar, Khalid; Talib, Juhaizah

    2016-09-05

    In this study a sensitive and selective gradient reverse phase UPLC-MS/MS method was developed for the simultaneous determination of six process related impurities, viz. Imp-I, Imp-II, Imp-III, Imp-IV, Imp-V and Imp-VI, in darunavir. The chromatographic separation was performed on an Acquity UPLC BEH C18 (50 mm × 2.1 mm, 1.7 μm) column using gradient elution of acetonitrile-methanol (80:20, v/v) and 5.0 mM ammonium acetate containing 0.01% formic acid at a flow rate of 0.4 mL/min. Both negative and positive electrospray ionization (ESI) modes were operated simultaneously using multiple reaction monitoring (MRM) for the quantification of all six impurities in darunavir. The developed method was fully validated following ICH guidelines with respect to specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, robustness and sample solution stability. The method was able to quantitate Imp-I, Imp-IV and Imp-V at 0.3 ppm, and Imp-II, Imp-III and Imp-VI at 0.2 ppm, with respect to 5.0 mg/mL of darunavir. The calibration curves showed good linearity over the concentration range of LOQ to 250% for all six impurities. The correlation coefficient obtained was >0.9989 in all cases. The accuracy of the method lies between 89.90% and 104.60% for all six impurities. Finally, the method was successfully applied to three formulation batches of darunavir to determine the above-mentioned impurities; however, no impurity was found above the LOQ. This method is a good quality control tool for the trace level quantification of six process related impurities in darunavir during its synthesis. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the degree of ill-conditioning of the coefficient matrix of the linear system is decreased, which enhances the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
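
    A toy version of a damped LSQR retrieval on a stand-in linear system, using SciPy's lsqr; the kernel matrix, noise level and damping factor are invented and do not reproduce the ADA forward model of the study.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    # Toy linear inverse problem A f = b standing in for the discretized
    # extinction equations; A's conditioning mimics an ill-posed kernel.
    rng = np.random.default_rng(3)
    n_wavelengths, n_bins = 12, 30
    A = rng.random((n_wavelengths, n_bins))
    f_true = np.exp(-0.5 * ((np.arange(n_bins) - 15) / 4.0) ** 2)  # smooth ASD
    b = A @ f_true + rng.normal(scale=1e-3, size=n_wavelengths)    # noisy data

    # LSQR solves min ||A f - b|| iteratively; damp adds Tikhonov-style
    # regularization that stabilizes the retrieval under noise.
    f_est = lsqr(A, b, damp=1e-2)[0]
    rel_err = np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true)
    print(f"relative error: {rel_err:.3f}")
    ```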

  14. Kinetic rate constant prediction supports the conformational selection mechanism of protein binding.

    PubMed

    Moal, Iain H; Bates, Paul A

    2012-01-01

    The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition, and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state-of-the-art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry near to that of the bound form prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies.
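
    The rate-to-affinity step mentioned above follows from K_d = k_off/k_on and ΔG = RT ln K_d; a small sketch with hypothetical predicted rate constants:

    ```python
    import math

    def binding_affinity(k_on, k_off, temperature=298.15):
        """Combine predicted rate constants into K_d and binding free energy.

        K_d = k_off / k_on  (M), and  dG = RT ln K_d  (kcal/mol).
        """
        R = 1.987204e-3                  # gas constant, kcal mol^-1 K^-1
        k_d = k_off / k_on
        dG = R * temperature * math.log(k_d)
        return k_d, dG

    # Hypothetical predictions: k_on = 1e6 M^-1 s^-1, k_off = 1e-3 s^-1.
    k_d, dG = binding_affinity(1e6, 1e-3)
    print(f"K_d = {k_d:.1e} M, dG = {dG:.1f} kcal/mol")  # 1e-9 M, ~ -12.3
    ```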

  15. Estimation of genetic parameters and response to selection for a continuous trait subject to culling before testing.

    PubMed

    Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A

    2012-02-01

    The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling, and on response from genetic selection when the true environmental covariance deviates from zero, were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially the genetic correlation (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while the genetic variance and heritability of the continuous trait, together with the genetic correlation, were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values for the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations when data were subject to culling. © 2011 Blackwell Verlag GmbH.

  16. A Kinematic Calibration Process for Flight Robotic Arms

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.

  17. Investigation of tracking systems properties in CAVE-type virtual reality systems

    NASA Astrophysics Data System (ADS)

    Szymaniak, Magda; Mazikowski, Adam; Meironke, Michał

    2017-08-01

    In recent years, many scientific and industrial centers in the world have developed virtual reality systems or laboratories. One of the most advanced solutions is the Immersive 3D Visualization Lab (I3DVL), a CAVE-type (Cave Automatic Virtual Environment) laboratory. It contains two CAVE-type installations: a six-screen installation arranged in the form of a cube, and a four-screen installation, a simplified version of the previous one. The user's feeling of "immersion" and interaction with the virtual world depend on many factors, in particular on the accuracy of the user tracking system. In this paper, properties of the tracking systems applied in I3DVL were investigated. Two parameters were selected for analysis: the accuracy of the tracking system and the range of detection of markers by the tracking system in the space of the CAVE. Measurements of system accuracy were performed for the six-screen installation, equipped with four tracking cameras, along three axes: X, Y, Z. Rotation around the Y axis was also analyzed. The measured tracking system shows good linear and rotational accuracy. The biggest issue was the range of monitoring of markers inside the CAVE: it turned out that the tracking system loses sight of the markers in the corners of the installation. For comparison, in the simplified version of the CAVE (the four-screen installation), equipped with eight tracking cameras, this problem did not occur. The obtained results will allow for improvement of the CAVE's quality.

  18. Highly sensitive spectrofluorimetric determination of lomefloxacin in spiked human plasma, urine and pharmaceutical preparations.

    PubMed

    Ulu, Sevgi Tatar

    2009-09-01

    A sensitive, simple and selective spectrofluorimetric method was developed for the determination of lomefloxacin in biological fluids and pharmaceutical preparations. The method is based on the reaction between the drug and 4-chloro-7-nitrobenzodioxazole in borate buffer of pH 8.5 to yield a highly fluorescent derivative that is measured at 533 nm after excitation at 433 nm. The calibration curves were linear over the concentration ranges of 12.5-625, 15-1500 and 20-2000 ng/mL for plasma, urine and standard solution, respectively. The limits of detection were 4.0 ng/mL in plasma, 5.0 ng/mL in urine and 7.0 ng/mL in standard solution. The intra-assay accuracy and precision in plasma ranged from 0.032 to 2.40% and 0.23 to 0.36%, respectively, while inter-assay accuracy and precision ranged from 0.45 to 2.10% and 0.25 to 0.38%, respectively. The intra-assay accuracy and precision estimated on spiked samples in urine ranged from 1.27 to 4.20% and 0.12 to 0.24%, respectively, while inter-assay accuracy and precision ranged from 1.60 to 4.00% and 0.14 to 0.25%, respectively. The mean recovery of lomefloxacin from plasma and urine was 98.34 and 98.43%, respectively. The method was successfully applied to the determination of lomefloxacin in pharmaceuticals and biological fluids.

  19. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  20. Selection on signal–reward correlation: limits and opportunities to the evolution of deceit in Turnera ulmifolia L.

    PubMed

    Benitez-Vieyra, S; Ordano, M; Fornoni, J; Boege, K; Domínguez, C A

    2010-12-01

    Because pollinators are unable to directly assess the amount of rewards offered by flowers, they rely on the information provided by advertising floral traits. Thus, having a lower intra-individual correlation between signal and reward (signal accuracy) than other plants in the population provides the opportunity to reduce investment in rewards and cheat pollinators. However, pollinators' cognitive capacities can impose a limit to the evolution of this plant cheating strategy if they can punish those plants with low signal accuracy. In this study, we examined the opportunity for cheating in the perennial weed Turnera ulmifolia L. evaluating the selective value of signal accuracy, floral display and reward production in a natural population. We found that plant reproductive success was positively related to signal accuracy and floral display, but not to nectar production. The intensity of selection on floral display was more than three times higher than on signal accuracy. The pattern of selection indicated that pollinators can select for signal accuracy provided by plants and suggests that learning abilities of pollinators can limit the evolution of deceptive strategies in T. ulmifolia. © 2010 The Authors. Journal Compilation © 2010 European Society For Evolutionary Biology.

  1. Facile quantitation of free thiols in a recombinant monoclonal antibody by reversed-phase high performance liquid chromatography with hydrophobicity-tailored thiol derivatization.

    PubMed

    Welch, Leslie; Dong, Xiao; Hewitt, Daniel; Irwin, Michelle; McCarty, Luke; Tsai, Christina; Baginski, Tomasz

    2018-06-02

    Free thiol content, and its consistency, is one of the product quality attributes of interest during technical development of manufactured recombinant monoclonal antibodies (mAbs). We describe a new, mid/high-throughput reversed-phase high-performance liquid chromatography (RP-HPLC) method, coupled with derivatization of free thiols, for the determination of total free thiol content in an E. coli-expressed therapeutic monovalent monoclonal antibody mAb1. Initial selection of the derivatization reagent used a hydrophobicity-tailored approach. Maleimide-based thiol-reactive reagents with varying degrees of hydrophobicity were assessed to identify and select one that provided adequate chromatographic resolution and robust quantitation of free thiol-containing mAb1 forms. The method relies on covalent derivatization of free thiols in denatured mAb1 with an N-tert-butylmaleimide (NtBM) label, followed by RP-HPLC separation with UV-based quantitation of native (disulfide containing) and labeled (free thiol containing) forms. The method demonstrated good specificity, precision, linearity, accuracy and robustness. Accuracy of the method, for samples with a wide range of free thiol content, was demonstrated using admixtures as well as by comparison to an orthogonal LC-MS peptide mapping method with isotope tagging of free thiols. The developed method has a facile workflow which fits well into both R&D characterization and quality control (QC) testing environments. The hydrophobicity-tailored approach to the selection of the free thiol derivatization reagent is easily applied to the rapid development of free thiol quantitation methods for full-length recombinant antibodies. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Development of a computer aided diagnosis model for prostate cancer classification on multi-parametric MRI

    NASA Astrophysics Data System (ADS)

    Alfano, R.; Soetemans, D.; Bauman, G. S.; Gibson, E.; Gaed, M.; Moussa, M.; Gomez, J. A.; Chin, J. L.; Pautler, S.; Ward, A. D.

    2018-02-01

    Multi-parametric MRI (mp-MRI) is becoming a standard in contemporary prostate cancer screening and diagnosis, and has been shown to aid physicians in cancer detection. It offers many advantages over traditional systematic biopsy, which has been shown to have very high clinical false-negative rates of up to 23% at all stages of the disease. However beneficial, mp-MRI is relatively complex to interpret and suffers from inter-observer variability in lesion localization and grading. Computer-aided diagnosis (CAD) systems have been developed as a solution, as they have the power to perform deterministic quantitative image analysis. We measured the accuracy of such a system, validated using accurately co-registered whole-mount digitized histology. We trained a logistic linear classifier (LOGLC), support vector machine (SVC), k-nearest neighbour (KNN) and random forest classifier (RFC) in a four-part ROI-based experiment against: 1) cancer vs. non-cancer, 2) high-grade (Gleason score ≥4+3) vs. low-grade cancer (Gleason score <4+3), 3) high-grade vs. other tissue components and 4) high-grade vs. benign tissue, by selecting the classifier with the highest AUC using 1-10 features from forward feature selection. The CAD model was able to classify malignant vs. benign tissue and detect high-grade cancer with high accuracy. Once fully validated, this work will form the basis for a tool that enhances the radiologist's ability to detect malignancies, potentially improving biopsy guidance, treatment selection, and focal therapy for prostate cancer patients, maximizing the potential for cure and increasing quality of life.
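
    A hedged sketch of forward feature selection driven by cross-validated AUC, as described above, with logistic regression standing in for the classifiers compared in the study; the data, the feature cap and the stopping rule are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, max_features=10):
        """Greedy forward selection keeping the feature set with best AUC."""
        selected, best_auc = [], 0.0
        remaining = list(range(X.shape[1]))
        while remaining and len(selected) < max_features:
            scores = []
            for j in remaining:
                cols = selected + [j]
                auc = cross_val_score(LogisticRegression(max_iter=1000),
                                      X[:, cols], y, cv=5,
                                      scoring="roc_auc").mean()
                scores.append((auc, j))
            auc, j = max(scores)
            if auc <= best_auc:          # stop when AUC no longer improves
                break
            best_auc, selected = auc, selected + [j]
            remaining.remove(j)
        return selected, best_auc

    # Stand-in ROI features and cancer / non-cancer labels.
    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 25))
    y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(size=200) > 0).astype(int)
    print(forward_select(X, y))
    ```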

  3. A fast and direct spectrophotometric method for the simultaneous determination of methyl paraben and hydroquinone in cosmetic products using successive projections algorithm.

    PubMed

    Esteki, M; Nouroozi, S; Shahsavari, Z

    2016-02-01

    To develop a simple and efficient spectrophotometric technique combined with chemometrics for the simultaneous determination of methyl paraben (MP) and hydroquinone (HQ) in cosmetic products, and specifically, to: (i) evaluate the potential use of the successive projections algorithm (SPA) on derivative spectrophotometric data in order to provide sufficient accuracy and model robustness and (ii) determine MP and HQ concentrations in cosmetics without tedious pre-treatments such as derivatization or extraction techniques, which are time-consuming and require hazardous solvents. The absorption spectra were measured in the wavelength range of 200-350 nm. Prior to building the chemometric models, the original and first-derivative absorption spectra of binary mixtures were used as calibration matrices. Variables selected by the successive projections algorithm were used to obtain multiple linear regression (MLR) models based on a small subset of wavelengths. The number of wavelengths and the starting vector were optimized, and comparison of the root mean square errors of calibration (RMSEC) and cross-validation (RMSECV) was applied to select effective wavelengths with the least collinearity and redundancy. Principal component regression (PCR) and partial least squares (PLS) models were also developed for comparison. The concentrations of the calibration matrix ranged from 0.1 to 20 μg mL(-1) for MP, and from 0.1 to 25 μg mL(-1) for HQ. The constructed models were tested on an external validation data set and finally on cosmetic samples. The results indicated that successive projections algorithm-multiple linear regression (SPA-MLR), applied to the first-derivative spectra, achieved the optimal performance for the two compounds when compared with the full-spectrum PCR and PLS. The root mean square error of prediction (RMSEP) was 0.083 and 0.314 for MP and HQ, respectively. To verify the accuracy of the proposed method, a recovery study on real cosmetic samples was carried out with satisfactory results (84-112%). The proposed method, which is an environmentally friendly approach using a minimum amount of solvent, is a simple, fast and low-cost analysis method that can provide high accuracy and robust models. The suggested method does not need any complex extraction procedure, which would be time-consuming and require hazardous solvents. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. A general strategy for performing temperature-programming in high performance liquid chromatography--further improvements in the accuracy of retention time predictions of segmented temperature gradients.

    PubMed

    Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C

    2012-01-27

    In the present work it is shown that the linear elution strength (LES) model, which was adapted from temperature-programming gas chromatography (GC), can also be employed for systematic method development in high-temperature liquid chromatography (HT-HPLC). The ability to predict isothermal retention times based on temperature-gradient as well as isothermal input data was investigated. For a small temperature interval of ΔT=40°C, both approaches result in very similar predictions. Average relative errors of predicted retention times of 2.7% and 1.9% were observed for simulations based on isothermal and temperature-gradient measurements, respectively. Concurrently, it was investigated whether the accuracy of retention time predictions of segmented temperature gradients can be further improved by a temperature-dependent calculation of the parameter S(T) of the LES relationship. It was found that the accuracy of retention time predictions of multi-step temperature gradients can be improved to around 1.5% if S(T) is calculated in a temperature-dependent manner. The adjusted experimental design, making use of four temperature-gradient measurements, was applied for systematic method development of selected food additives by high-temperature liquid chromatography. Method development was performed within a temperature interval from 40°C to 180°C using water as mobile phase. Two separation methods were established in which selected food additives were baseline-separated. In addition, good agreement between simulation and experiment was observed, with an average relative error of predicted retention times for complex segmented temperature gradients of less than 5%. Finally, a schedule of recommendations to assist the practitioner during systematic method development in high-temperature liquid chromatography was established. Copyright © 2011 Elsevier B.V. All rights reserved.
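
    The prediction step can be pictured with a small numerical sketch (Python; it assumes an LES-type law ln k(T) = ln k_ref − S(T)·(T − T_ref), ignores dwell and thermal-lag effects, and all names and values are illustrative, not fitted parameters from the paper):

    ```python
    import numpy as np

    def k_of_T(T, k_ref, T_ref, S):
        return k_ref * np.exp(-S(T) * (T - T_ref))   # LES-type retention law

    def temperature(t, segments):
        """segments: list of (duration_min, T_start, T_end) linear ramps."""
        for dur, T0, T1 in segments:
            if t <= dur:
                return T0 + (T1 - T0) * t / dur
            t -= dur
        return segments[-1][2]                       # hold final temperature

    def retention_time(t0, segments, k_ref, T_ref, S, dt=1e-3):
        """Integrate migration; the analyte elutes when the fraction hits 1."""
        t, x = 0.0, 0.0
        while x < 1.0:
            k = k_of_T(temperature(t, segments), k_ref, T_ref, S)
            x += dt / (t0 * (1.0 + k))               # dx = dt / (t0 * (1 + k))
            t += dt
        return t

    # e.g. retention_time(1.2, [(5, 40, 40), (10, 40, 180)], 25.0, 40.0,
    #                     S=lambda T: 0.02)          # S may also vary with T
    ```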

  5. Music and natural sounds in an auditory steady-state response based brain-computer interface to increase user acceptance.

    PubMed

    Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk

    2017-05-01

    Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby the ASSR is modulated by mental concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method that minimizes auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42 Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
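
    A hedged sketch of the decoding pipeline (Python with SciPy and scikit-learn; the 2-s Welch window, sampling rate and variable names are assumptions for illustration):

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def assr_features(eeg, fs, freqs=(38.0, 42.0)):
        """eeg: (n_channels, n_samples) single trial -> per-channel spectral
        power at the two modulation frequencies plus their ratio."""
        feats = []
        for ch in eeg:
            f, pxx = welch(ch, fs=fs, nperseg=int(fs * 2))   # 0.5 Hz bins
            p38, p42 = (pxx[np.argmin(np.abs(f - fq))] for fq in freqs)
            feats += [p38, p42, p38 / p42]
        return np.array(feats)

    # clf = LinearDiscriminantAnalysis().fit(
    #     np.array([assr_features(tr, fs=512) for tr in train_trials]), labels)
    ```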

  6. Accuracy of estimation of genomic breeding values in pigs using low-density genotypes and imputation.

    PubMed

    Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

    2014-04-16

    Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate the accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross-validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (imputation accuracy: R² = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R² = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and selection candidates results in lower accuracy of genomic evaluation.
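
    The core prediction scheme can be sketched as SNP-based ridge regression with 10-fold cross-validation (Python with scikit-learn; Z, y, and the shrinkage value are illustrative assumptions, not the paper's settings):

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    def cv_accuracy(Z, y, alpha=1000.0, folds=10, seed=0):
        """Z: (animals x SNPs) genotype matrix; y: de-regressed breeding
        values. Returns the mean prediction/target correlation over folds."""
        accs = []
        for tr, te in KFold(folds, shuffle=True, random_state=seed).split(Z):
            pred = Ridge(alpha=alpha).fit(Z[tr], y[tr]).predict(Z[te])
            accs.append(np.corrcoef(pred, y[te])[0, 1])
        return float(np.mean(accs))
    ```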

  7. Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns

    PubMed Central

    Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German

    2017-01-01

    We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns from electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses with a given stimulus condition. To this end, a Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection, by computing a relevance vector from the extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection, which performs an additional transformation of the relevant features to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation purposes, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that EKRA estimates a relevant representation space from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms state-of-the-art methods in brain activity discrimination accuracy, with the benefit of enhanced physiological interpretability of the task at hand. PMID:29056897
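
    The Centered Kernel Alignment functional at the heart of the method can be written compactly (Python/NumPy; this shows only the alignment score between a feature kernel K and a label kernel L, not the optimization over the linear projection):

    ```python
    import numpy as np

    def centered_kernel_alignment(K, L):
        """Normalized HSIC between two (n x n) kernel matrices."""
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        Kc, Lc = H @ K @ H, H @ L @ H
        return np.trace(Kc @ Lc) / np.sqrt(np.trace(Kc @ Kc) * np.trace(Lc @ Lc))
    ```

    In the full method this score would be maximized with respect to the projection that maps the input features to the discriminative space.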

  8. Statistical analysis of textural features for improved classification of oral histopathological images.

    PubMed

    Muthu Rama Krishnan, M; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K

    2012-04-01

    The objective of this paper is to provide an improved technique that can assist oncopathologists in correct screening of oral precancerous conditions, especially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme comprises collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation and, finally, classification. In this study, collagen fibres are segmented on the R, G, B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification to reduce the stain intensity variation. Textural features of the collagen area are then extracted using fractal approaches, viz. differential box counting and the Brownian motion curve. Feature selection is done using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated based on various statistical tests to confirm Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. Moreover, the routine screening is designed based on two statistical classifiers, viz. Bayesian classification and support vector machines (SVM), to classify normal and OSF. It is observed that SVM with a linear kernel function provides better classification accuracy (91.64%) than the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%. The results are studied and discussed.
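
    One of the named texture features, differential box counting, admits a short sketch (Python/NumPy; the box sizes are illustrative and this is a generic DBC estimate rather than the paper's exact implementation):

    ```python
    import numpy as np

    def dbc_fractal_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
        """img: square 8-bit grayscale patch; returns the DBC dimension as
        the slope of log N(r) versus log(1/r)."""
        M = img.shape[0]
        log_inv_r, log_N = [], []
        for s in box_sizes:
            h = 256.0 * s / M                    # box height in gray levels
            N = 0
            for i in range(0, M - M % s, s):
                for j in range(0, M - M % s, s):
                    block = img[i:i + s, j:j + s]
                    # boxes covering the intensity surface over this cell
                    N += int(block.max() // h) - int(block.min() // h) + 1
            log_inv_r.append(np.log(M / s))
            log_N.append(np.log(N))
        slope, _ = np.polyfit(log_inv_r, log_N, 1)
        return slope
    ```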

  9. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing the Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter to determine Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra in a wider frequency range of a drive signal, as generally harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.
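
    The linearization step itself reduces to a change of axis plus interpolation; a minimal sketch (Python/NumPy, assuming channels are recorded at equal time steps over one half-period of the sine waveform; a full treatment would also correct for the non-uniform dwell time per velocity interval):

    ```python
    import numpy as np

    def linearize_sine_spectrum(counts, v_max, n_out=None):
        n = len(counts)
        phase = np.pi * (np.arange(n) + 0.5) / n       # equal steps in time
        v_sine = -v_max * np.cos(phase)                # monotone velocity axis
        v_lin = np.linspace(-v_max, v_max, n_out or n) # uniform target axis
        return v_lin, np.interp(v_lin, v_sine, counts)
    ```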

  10. Innovative Technique for Noise Reduction in Spacecraft Doppler Tracking for Planetary Interior Studies

    NASA Astrophysics Data System (ADS)

    Notaro, V.; Armstrong, J. W.; Asmar, S.; Di Ruscio, A.; Iess, L.; Mariani, M., Jr.

    2017-12-01

    Precise measurements of spacecraft range rate, enabled by two-way microwave links, are used in radio science experiments for planetary geodesy including the determination of planetary gravitational fields for the purpose of modeling the interior structure. The final accuracies in the estimated gravity harmonic coefficients depend almost linearly on the Doppler noise in the link. We ran simulations to evaluate the accuracy improvement attainable in the estimation of the gravity harmonic coefficients of Venus (with a representative orbiter) and Mercury (with the BepiColombo spacecraft), using our proposed innovative noise-cancellation technique. We showed how the use of an additional, smaller and stiffer, receiving-only antenna could reduce the leading noise sources in a Ka-band two-way link such as tropospheric and antenna mechanical noises. This is achieved through a suitable linear combination (LC) of Doppler observables collected at the two antennas at different times. In our simulations, we considered a two-way link either from NASA's DSS 25 antenna in California or from ESA's DSA-3 antenna in Malargüe (Argentina). Moreover, we selected the 12-m Atacama Pathfinder EXperiment (APEX) in Chile as the three-way antenna and developed its tropospheric noise model using available atmospheric data and mechanical stability specifications. For an 8-hour Venus orbiter tracking pass in Chajnantor's winter/night conditions, the accuracy of the simulated LC Doppler observable at 10-s integration time is 6 mm/s, to be compared to 23 mm/s for the two-way link. For BepiColombo, we obtained 16.5 mm/s and 35 mm/s, respectively for the LC and two-way links. The benefits are even larger at longer time scales. Numerical simulations indicate that such noise reduction would provide significant improvements in the determination of Venus's and Mercury's gravity field coefficients. If implemented, this noise-reducing technique will be valuable for planetary geodesy missions, where the accuracy in the estimation of high-order gravity harmonic coefficients is limited by tropospheric and antenna mechanical noises that are difficult to reduce at short integration times. Benefits are however expected in all precision radio science experiments with deep space probes.

  11. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands.

    PubMed

    Salem, Salem Ibrahim; Higa, Hiroto; Kim, Hyungjun; Kobayashi, Hiroshi; Oki, Kazuo; Oki, Taikan

    2017-07-31

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated using simulated and measured datasets to understand their strengths and limitations. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporate linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms that incorporate the 665-nm and 680-nm bands and the band-tuning selection approach outperformed the other algorithms, with root mean square errors (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, shows that 3b_tuning_QP and 3b_680_QP outperform the other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be combined to reduce the error. Comparing the results of the measured and simulated datasets reveals that the algorithms that incorporate the 665-nm band outperform the other algorithms for the measured dataset (RMSE = 36.84 mg·m−3), while the algorithms that incorporate the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3).
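
    As an illustration of one algorithmic combination of the kind evaluated here (Python/NumPy; a Gitelson-style three-band index with placeholder band positions standing in for the tuned bands, followed by the LN/QP/PW fits named above):

    ```python
    import numpy as np

    def three_band_index(R, b1=665, b2=708, b3=753):
        """R: mapping band center (nm) -> reflectance array; b2 and b3 stand
        in for the tuned band positions."""
        return (1.0 / R[b1] - 1.0 / R[b2]) * R[b3]

    def fit_and_rmse(index, chla, kind="QP"):
        if kind == "PW":                  # chla = a * index**b (index > 0)
            b, log_a = np.polyfit(np.log(index), np.log(chla), 1)
            pred = np.exp(log_a) * index ** b
        else:
            deg = 1 if kind == "LN" else 2
            pred = np.polyval(np.polyfit(index, chla, deg), index)
        return float(np.sqrt(np.mean((pred - chla) ** 2)))
    ```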

  12. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands

    PubMed Central

    Higa, Hiroto; Kobayashi, Hiroshi; Oki, Kazuo

    2017-01-01

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated using simulated and measured datasets to understand their strengths and limitations. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporate linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms that incorporate the 665-nm and 680-nm bands and the band-tuning selection approach outperformed the other algorithms, with root mean square errors (RMSE) of 15.87 mg·m−3, 16.25 mg·m−3, and 19.05 mg·m−3, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, shows that 3b_tuning_QP and 3b_680_QP outperform the other algorithms in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be combined to reduce the error. Comparing the results of the measured and simulated datasets reveals that the algorithms that incorporate the 665-nm band outperform the other algorithms for the measured dataset (RMSE = 36.84 mg·m−3), while the algorithms that incorporate the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m−3). PMID:28758984

  13. Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions

    PubMed Central

    Zhang, Jingji

    2018-01-01

    We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of 'monitoring bases' A1492, A1493 and G530 in 16S rRNA in favor of their 'activated' state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the 'intrinsic selectivity' of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome. PMID:29267976

  14. Testing the Linearity of the Cosmic Origins Spectrograph FUV Channel Thermal Correction

    NASA Astrophysics Data System (ADS)

    Fix, Mees B.; De Rosa, Gisella; Sahnow, David

    2018-05-01

    The Far Ultraviolet Cross Delay Line (FUV XDL) detector on the Cosmic Origins Spectrograph (COS) is subject to temperature-dependent distortions. The correction performed by the COS calibration pipeline (CalCOS) assumes that these changes are linear across the detector. In this report we evaluate the accuracy of the linear approximations using data obtained on orbit. Our results show that the thermal distortions are consistent with our current linear model.

  15. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve registration accuracy or to reduce computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion caused by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration; it can therefore enhance the effectiveness of the geometry recognition process, increase registration accuracy by increasing the number of cameras or feature points of each image, increase registration speed by reducing the number of cameras or feature points, and provide simultaneous information on the shapes and colors of captured objects.

  16. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution to the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R²) of 0.63 at the 0.01 level against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could bring about better results in crown closure estimation, closer to the actual condition of moso bamboo forest.
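
    Fully constrained linear SMA (abundances non-negative and summing to one) is commonly solved by appending a heavily weighted sum-to-one row to a non-negative least-squares problem; a hedged sketch (Python/SciPy, with illustrative names):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fcls(E, y, weight=1e3):
        """E: (bands x endmembers) endmember spectra; y: one pixel spectrum.
        Returns abundances >= 0 that approximately sum to one."""
        A = np.vstack([E, weight * np.ones((1, E.shape[1]))])
        b = np.append(y, weight)
        abundances, _ = nnls(A, b)
        return abundances
    ```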

  17. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector would be obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure would be calculated based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method was that it could be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.

  18. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
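
    One filter step of such a scheduled design can be sketched as interpolation of the stored trim data followed by a steady-state Kalman update (Python/NumPy; the one-dimensional scheduling variable, the database layout and the deviation-from-trim bookkeeping are simplifying assumptions, not NASA's implementation):

    ```python
    import numpy as np

    def interp_entry(op, grid, entries):
        """Linearly interpolate stacked vectors/matrices over the grid."""
        i = int(np.clip(np.searchsorted(grid, op) - 1, 0, len(grid) - 2))
        w = (op - grid[i]) / (grid[i + 1] - grid[i])
        return (1 - w) * entries[i] + w * entries[i + 1]

    def pwl_kf_step(x_hat, du, y, op, db):
        """du: input deviation from trim; y: sensed engine measurements."""
        A, B, C, K, x_trim = (interp_entry(op, db["grid"], db[k])
                              for k in ("A", "B", "C", "K", "x_trim"))
        x_pred = x_trim + A @ (x_hat - x_trim) + B @ du  # predict about trim
        return x_pred + K @ (y - C @ x_pred)             # steady-state update
    ```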

  19. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  20. A novel quantitative analysis method of three-dimensional fluorescence spectra for vegetable oils contents in edible blend oil

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei

    2015-04-01

    Edible blend oil is a mixture of vegetable oils. A qualified blend oil can meet the daily need for two essential fatty acids to achieve balanced human nutrition. Each vegetable oil has its own composition, so the vegetable oil contents in an edible blend oil determine its nutritional components. A high-precision quantitative analysis method to detect the vegetable oil contents in blend oil is therefore necessary to ensure balanced nutrition. The three-dimensional fluorescence technique offers high selectivity, high sensitivity and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra will improve the accuracy of the measurement. A novel quantitative analysis method based on Quasi-Monte Carlo integration is proposed to improve the measurement sensitivity and reduce the random error. The partial least squares method is used to solve the nonlinear equations to avoid the effect of multicollinearity. The recovery rates of blend oil mixed from peanut oil, soybean oil and sunflower oil are calculated to verify the accuracy of the method; they are higher than those obtained with the linear method commonly used for component concentration measurement.

  1. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  2. PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.

    PubMed

    Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude

    2015-04-27

    Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.

  3. Comparative study of novel versus conventional two-wavelength spectrophotometric methods for analysis of spectrally overlapping binary mixture.

    PubMed

    Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom

    2015-09-05

    Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed; the first method depends upon advanced absorbance subtraction (AAS), while the other method relies on advanced amplitude modulation (AAM); in addition to the well established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of drugs in their pharmaceutical formulations. No interference was observed from common additives and the validity of the methods was tested. The obtained results have been statistically compared to that of official spectrophotometric methods to give a conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. A comparative study of smart spectrophotometric methods for simultaneous determination of a skeletal muscle relaxant and an analgesic in combined dosage form

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Mohamed, Dalia

    2015-04-01

    Six simple, specific, accurate and precise spectrophotometric methods were developed and validated for the simultaneous determination of the analgesic drug; paracetamol (PARA) and the skeletal muscle relaxant; dantrolene sodium (DANT). Three methods are manipulating ratio spectra namely; ratio difference (RD), ratio subtraction (RS) and mean centering (MC). The other three methods are utilizing the isoabsorptive point either at zero order namely; absorbance ratio (AR) and absorbance subtraction (AS) or at ratio spectrum namely; amplitude modulation (AM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined. The selectivity of the developed methods was investigated by analyzing laboratory prepared mixtures of the drugs and their combined dosage form. Standard deviation values are less than 1.5 in the assay of raw materials and capsules. The obtained results were statistically compared with each other and with those of reported spectrophotometric ones. The comparison showed that there is no significant difference between the proposed methods and the reported methods regarding both accuracy and precision.

  5. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images acquired with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR in the SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using the first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of T1 estimated with the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
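
    For context, the widely used fixed-TR linearization that the new method generalizes can be sketched in a few lines (Python/NumPy; function and variable names are illustrative): since S = M0 sin α (1 − E1)/(1 − E1 cos α) with E1 = exp(−TR/T1), a plot of S/sin α against S/tan α is a straight line with slope E1.

    ```python
    import numpy as np

    def t1_linear_fixed_tr(signals, flip_angles_deg, tr_ms):
        a = np.deg2rad(np.asarray(flip_angles_deg, dtype=float))
        S = np.asarray(signals, dtype=float)
        slope, intercept = np.polyfit(S / np.tan(a), S / np.sin(a), 1)
        t1 = -tr_ms / np.log(slope)          # valid for 0 < slope < 1
        m0 = intercept / (1.0 - slope)
        return t1, m0
    ```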

  6. Application of Landsat 5-TM and GIS data to elk habitat studies in northern Idaho

    NASA Astrophysics Data System (ADS)

    Hayes, Stephen Gordon

    1999-12-01

    An extensive geographic information system (GIS) database and a large radiotelemetry sample of elk (n = 153) were used to study habitat use and selection differences between cow and bull elk (Cervus elaphus) in the Coeur d'Alene Mountains of Idaho. Significant sex differences in 40 ha area use, and interactive effects of sex and season on selection of 40 ha areas from home ranges were found. In all seasons, bulls used habitats with more closed canopy forest, more hiding cover, and less shrub and graminoid cover, than cows. Cows selected areas with shrub and graminoid cover in winter and avoided areas with closed canopy forest and hiding cover in winter and summer seasons. Both sexes selected 40 ha areas of unfragmented hiding cover and closed canopy forest during the hunting season. Bulls also avoided areas with high open road densities during the rut and hunting season. These results support present elk management recommendations, but our observations of sexual segregation provide biologists with an opportunity to refine habitat management plans to target bulls and cows specifically. Furthermore, the results demonstrate that hiding cover and canopy closure can be accurately estimated from Landsat 5-TM imagery and GIS soil data at a scale and resolution to which elk respond. As a result, our habitat mapping methods can be applied to large areas of private and public land with consistent, cost-efficient results. Non-Lambertian correction models of Landsat 5-TM imagery were compared to an uncorrected image to determine if topographic normalization increased the accuracy of elk habitat maps of forest structure in northern Idaho. The non-Lambertian models produced elk habitat maps with overall and kappa statistic accuracies as much as 21.3% higher (p < 0.0192) than the uncorrected image. Log-linear models and power analysis were used to study the dependence of commission and omission error rates on topographic normalization, vegetation type, and solar incidence angle. Examination of Type I and Type II likelihood ratio test error rates indicated that topographic normalization increased accuracy in sapling/pole closed forest, clearcuts, open forest, and shrubfields. Non-Lambertian models that allowed the Minnaert constant (k) to vary as a function of solar incidence and vegetation type offered no improvement in accuracy over the non-Lambertian model with k estimated for each TM band. The bias of habitat use proportion estimates, derived from the most accurate map, was quantified and the applicability of the non-Lambertian model to elk habitat mapping is discussed.

  7. Does clinical pretest probability influence image quality and diagnostic accuracy in dual-source coronary CT angiography?

    PubMed

    Thomas, Christoph; Brodoefel, Harald; Tsiflikas, Ilias; Bruckner, Friederike; Reimann, Anja; Ketelsen, Dominik; Drosch, Tanja; Claussen, Claus D; Kopp, Andreas; Heuschmid, Martin; Burgstahler, Christof

    2010-02-01

    To prospectively evaluate the influence of the clinical pretest probability, assessed by the Morise score, on image quality and diagnostic accuracy in coronary dual-source computed tomography angiography (DSCTA). In 61 patients, DSCTA and invasive coronary angiography were performed. Subjective image quality and accuracy for stenosis detection (>50%) of DSCTA, with invasive coronary angiography as the gold standard, were evaluated. The influence of pretest probability on image quality and accuracy was assessed by logistic regression and chi-square testing. Correlations of image quality and accuracy with the Morise score were determined using linear regression. Thirty-eight patients were categorized into the high, 21 into the intermediate, and 2 into the low probability group. Accuracies for the detection of significant stenoses were 0.94, 0.97, and 1.00, respectively. Logistic regressions and chi-square tests showed statistically significant correlations between Morise score and image quality (P < .0001 and P < .001) and accuracy (P = .0049 and P = .027). Linear regression revealed a cutoff Morise score for a good image quality of 16 and a cutoff for a barely diagnostic image quality beyond the upper end of the Morise scale. Pretest probability is a weak predictor of image quality and diagnostic accuracy in coronary DSCTA. A sufficient image quality for diagnostic images can be reached with all pretest probabilities. Therefore, coronary DSCTA might be suitable also for patients with a high pretest probability. Copyright 2010 AUR. Published by Elsevier Inc. All rights reserved.

  8. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    USDA-ARS?s Scientific Manuscript database

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  9. Static headspace gas chromatographic method for quantitative determination of residual solvents in pharmaceutical drug substances according to European Pharmacopoeia requirements.

    PubMed

    Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan

    2004-11-19

    A static headspace (HS) gas chromatographic method for quantitative determination of residual solvents in a drug substance has been developed according to European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for ethanol, tetrahydrofuran and toluene determination. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision and robustness were determined, and excellent results were obtained.

  10. Selection of a seventh spectral band for the LANDSAT-D thematic mapper

    NASA Technical Reports Server (NTRS)

    Holmes, Q. A. (Principal Investigator); Nuesch, D. R.

    1978-01-01

    The author has identified the following significant results. Each of the candidate bands was examined in terms of the feasibility of gathering high quality imagery from space, taking into account solar illumination, atmospheric attenuation, and the signal/noise ratio achievable within the TM sensor constraints. For the 2.2 micron region and the thermal IR region, in-band signal values were calculated from representative spectral reflectance/emittance curves and a linear discriminant analysis was employed to predict classification accuracies. Based upon the substantial improvement (from 78 to 92%) in discriminating zones of hydrothermally altered rocks from unaltered zones over a broad range of observation conditions, a 2.08-2.35 micron spectral band having a ground resolution of 30 meters was recommended.

  11. Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images

    PubMed Central

    Wang, Yu; Zhang, Yaonan; Yao, Zhaomin; Zhao, Ruixue; Zhou, Fengfeng

    2016-01-01

    Non-lethal macular diseases greatly impact patients' quality of life and can cause vision loss at late stages. Visual inspection of optical coherence tomography (OCT) images by experienced clinicians is the main diagnosis technique. We proposed a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME) and healthy macula. The linear configuration pattern (LCP) based features of the OCT images were screened by the Correlation-based Feature Subset (CFS) selection algorithm. And the best model based on the sequential minimal optimization (SMO) algorithm achieved 99.3% overall accuracy for the three classes of samples. PMID:28018716

  12. A scalable method for computing quadruplet wave-wave interactions

    NASA Astrophysics Data System (ADS)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one must balance accuracy against computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptivity consists in adapting the abscissae of the locus integrand in relation to the magnitude of the known terms, and it is extended to the highest level of the WRT method to select interacting wavenumber configurations hierarchically in relation to their importance. This adaptivity results in a speed-up of one to three orders of magnitude, depending on the measure of accuracy. This measure of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results ranging from academic spectra and simple growth curves to more complicated field cases using a 3G wave model.

  13. Accuracy of CT-based attenuation correction in PET/CT bone imaging

    NASA Astrophysics Data System (ADS)

    Abella, Monica; Alessio, Adam M.; Mankoff, David A.; MacDonald, Lawrence R.; Vaquero, Juan Jose; Desco, Manuel; Kinahan, Paul E.

    2012-05-01

    We evaluate the accuracy of scaling CT images for attenuation correction of PET data measured for bone. While the standard tri-linear approach has been well tested for soft tissues, the impact of CT-based attenuation correction on the accuracy of tracer uptake in bone has not been reported in detail. We measured the accuracy of attenuation coefficients of bovine femur segments and patient data using a tri-linear method applied to CT images obtained at different kVp settings. Attenuation values at 511 keV obtained with a 68Ga/68Ge transmission scan were used as a reference standard. The impact of inaccurate attenuation images on PET standardized uptake values (SUVs) was then evaluated using simulated emission images and emission images from five patients with elevated levels of FDG uptake in bone at disease sites. The CT-based linear attenuation images of the bovine femur segments underestimated the true values by 2.9 ± 0.3% for cancellous bone regardless of kVp. For compact bone the underestimation ranged from 1.3% at 140 kVp to 14.1% at 80 kVp. In the patient scans at 140 kVp the underestimation was approximately 2% averaged over all bony regions. The sensitivity analysis indicated that errors in PET SUVs in bone are approximately proportional to errors in the estimated attenuation coefficients for the same regions. The variability in SUV bias also increased approximately linearly with the error in linear attenuation coefficients. These results suggest that bias in bone uptake SUVs of PET tracers ranges from 2.4% to 5.9% when using CT scans at 140 and 120 kVp for attenuation correction. Lower kVp scans have the potential for considerably more error in dense bone. This bias is present in any PET tracer with bone uptake but may be clinically insignificant for many imaging tasks. However, errors from CT-based attenuation correction methods should be carefully evaluated if quantitation of tracer uptake in bone is important.
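
    The scaling itself is a piecewise linear map from CT numbers to 511-keV attenuation coefficients; a hedged sketch of a common bilinear variant (Python/NumPy; the breakpoint and the bone-segment slope are illustrative placeholders that in practice depend on the CT kVp):

    ```python
    import numpy as np

    MU_WATER_511 = 0.096        # cm^-1, water at 511 keV

    def hu_to_mu511(hu, bone_slope=5.1e-5, breakpoint_hu=0.0):
        hu = np.asarray(hu, dtype=float)
        soft = MU_WATER_511 * (1.0 + hu / 1000.0)         # air-water segment
        bone = MU_WATER_511 + bone_slope * (hu - breakpoint_hu)
        return np.where(hu <= breakpoint_hu, soft, bone)  # continuous at break
    ```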

  14. Effect of conductance linearity and multi-level cell characteristics of TaOx-based synapse device on pattern recognition accuracy of neuromorphic system

    NASA Astrophysics Data System (ADS)

    Sung, Changhyuck; Lim, Seokjae; Kim, Hyungjun; Kim, Taesu; Moon, Kibong; Song, Jeonghwan; Kim, Jae-Joon; Hwang, Hyunsang

    2018-03-01

    To improve the classification accuracy of an image data set (CIFAR-10) by using analog input voltage, synapse devices with excellent conductance linearity (CL) and multi-level cell (MLC) characteristics are required. We analyze the CL and MLC characteristics of TaOx-based filamentary resistive random access memory (RRAM) to implement the synapse device in neural network hardware. Our findings show that the number of oxygen vacancies in the filament constriction region of the RRAM directly controls the CL and MLC characteristics. By adopting a Ta electrode (instead of Ti) and the hot-forming step, we could form a dense conductive filament. As a result, a wide range of conductance levels with CL is achieved and significantly improved image classification accuracy is confirmed.

  15. A minimax technique for time-domain design of preset digital equalizers using linear programming

    NASA Technical Reports Server (NTRS)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
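
    The minimax criterion maps directly onto a linear program: minimize t over taps h subject to |d − Ch| ≤ t, with C the channel convolution matrix and d the desired raised-cosine samples. A sketch (Python/SciPy; names are illustrative and d is assumed no longer than the convolution output):

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import linprog

    def minimax_equalizer(c, d, n_taps):
        n_out = len(c) + n_taps - 1
        C = toeplitz(np.r_[c, np.zeros(n_taps - 1)],
                     np.r_[c[0], np.zeros(n_taps - 1)])   # C @ h == conv(c, h)
        d = np.r_[d, np.zeros(n_out - len(d))]
        cost = np.r_[np.zeros(n_taps), 1.0]               # variables [h, t]
        ones = np.ones((n_out, 1))
        A_ub = np.block([[C, -ones], [-C, -ones]])        # +/-(C h - d) <= t
        b_ub = np.r_[d, -d]
        bounds = [(None, None)] * n_taps + [(0, None)]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:n_taps], res.x[-1]                  # taps, max |error|
    ```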

  16. The Effects of Q-Matrix Design on Classification Accuracy in the Log-Linear Cognitive Diagnosis Model.

    PubMed

    Madison, Matthew J; Bradshaw, Laine P

    2015-06-01

    Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.

  17. Development of a Calibration Strip for Immunochromatographic Assay Detection Systems.

    PubMed

    Gao, Yue-Ming; Wei, Jian-Chong; Mak, Peng-Un; Vai, Mang-I; Du, Min; Pun, Sio-Hang

    2016-06-29

    With many benefits and applications, immunochromatographic (ICG) assay detection systems have been widely reported. However, the existing research mainly focuses on increasing the dynamic detection range or the application fields. Calibration of the detection system, which has a great influence on detection accuracy, has not been addressed properly. In this context, this work develops a calibration strip for ICG assay photoelectric detection systems. An image of the test strip is captured by an image acquisition device, followed by a fuzzy c-means (FCM) clustering algorithm and a maximin-distance algorithm for image segmentation. Additionally, experiments are conducted to find the best characteristic quantity. By analyzing the linear coefficient, the average value of hue (H) at 14 min is chosen as the characteristic quantity and the empirical formula between H and the optical density (OD) value is established. Therefore, H, saturation (S), and value (V) are calculated from a number of selected OD values. Then, the H, S, and V values are converted to the RGB color space and a high-resolution printer is used to print the strip images on cellulose nitrate membranes. Finally, verification of the printed calibration strips is conducted by analyzing the linear correlation between OD and the spectral reflectance, which shows a good linear correlation (R² = 98.78%).
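
    The calibration relationship reduces to a straight-line fit of mean hue against optical density; a minimal sketch (Python/NumPy; the patch format and variable names are assumptions):

    ```python
    import colorsys
    import numpy as np

    def mean_hue(rgb_patch):
        """rgb_patch: (h, w, 3) uint8 image of the test-line region."""
        px = rgb_patch.reshape(-1, 3) / 255.0
        return float(np.mean([colorsys.rgb_to_hsv(*p)[0] for p in px]))

    def fit_h_vs_od(hues, ods):
        h, od = np.asarray(hues, float), np.asarray(ods, float)
        slope, intercept = np.polyfit(od, h, 1)
        resid = h - (slope * od + intercept)
        r2 = 1.0 - resid @ resid / np.sum((h - h.mean()) ** 2)
        return slope, intercept, r2          # R^2 checks the linearity
    ```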

  18. A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

    PubMed Central

    Poreba, Martyna; Goulette, François

    2015-01-01

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and yields reliable results when pre-aligned point clouds of the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589

  19. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    PubMed

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled with two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
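
    The core linear-algebra step can be illustrated in a few lines: the observed isotopomer distribution is modelled as a non-negative mixture of the patterns expected for 0-3 labelled glycines, and the precursor enrichment and fractional synthesis then follow from the binomial structure of the recovered weights. The sketch below uses made-up basis patterns and simulated data; it mirrors the idea, not the published procedure.

      import numpy as np
      from scipy.optimize import nnls

      # Columns: made-up isotope patterns of the peptide with 0, 1, 2 or 3
      # glycine residues carrying the 13C2 label (each column sums to 1).
      basis = np.array([[0.70, 0.02, 0.00, 0.00],
                        [0.20, 0.68, 0.02, 0.00],
                        [0.08, 0.20, 0.68, 0.02],
                        [0.02, 0.08, 0.20, 0.68],
                        [0.00, 0.02, 0.10, 0.30]])
      # Simulate a measurement: 60% pre-existing peptide plus 40% newly
      # synthesized peptide from a pool with 20% 13C2-glycine (Binom(3, p)).
      p, f_new = 0.20, 0.40
      w = np.array([(1 - f_new) + f_new * (1 - p) ** 3,
                    f_new * 3 * p * (1 - p) ** 2,
                    f_new * 3 * p ** 2 * (1 - p),
                    f_new * p ** 3])
      observed = basis @ w
      weights, _ = nnls(basis, observed)             # recover mixing weights
      r = weights[2] / weights[1]                    # = p/(1-p) for a binomial
      p_est = r / (1 + r)                            # precursor enrichment
      f_est = weights[1] / (3 * p_est * (1 - p_est) ** 2)  # fractional synthesis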

  20. Multi-Mode Estimation for Small Fixed Wing Unmanned Aerial Vehicle Localization Based on a Linear Matrix Inequality Approach

    PubMed Central

    Elzoghby, Mostafa; Li, Fu; Arafa, Ibrahim. I.; Arif, Usman

    2017-01-01

    Information fusion from multiple sensors ensures the accuracy and robustness of a navigation system, especially in the absence of global positioning system (GPS) data, which are degraded in many cases. A multi-mode estimation approach for a small fixed wing unmanned aerial vehicle (UAV) localization framework is proposed, based on a Luenberger observer and a linear matrix inequality (LMI) design. The proposed estimation technique relies on the interaction between multiple measurement modes and a continuous observer. State estimation is performed while switching between multiple active sensors to exploit the available information as much as possible, especially in GPS-denied environments. A Luenberger observer is implemented as the continuous observer to optimize estimation performance. The observer gain matrix (L) is chosen by solving a Lyapunov equation by means of an LMI algorithm; Lyapunov stability then keeps the dynamic estimation error bounded. Simulation results for a small fixed wing UAV localization problem are presented and compared with a single-mode Extended Kalman Filter (EKF), demonstrating the viability of the proposed strategy. PMID:28420214
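
    A generic sketch of the LMI step (a textbook Luenberger-gain design, not necessarily the authors' exact formulation): with the substitution Y = PL, stability of the error dynamics e' = (A - LC)e becomes the linear matrix inequality A'P + PA - YC - C'Y' < 0 with P > 0, solvable with CVXPY and an SDP-capable solver such as SCS. The matrices below are toy values.

      import numpy as np
      import cvxpy as cp

      A = np.array([[0.0, 1.0], [-2.0, -0.5]])      # toy system matrices
      C = np.array([[1.0, 0.0]])
      n, m = A.shape[0], C.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      Y = cp.Variable((n, m))                       # Y = P @ L
      M = A.T @ P + P @ A - Y @ C - C.T @ Y.T
      # Symmetrize explicitly so CVXPY accepts the semidefinite constraint.
      lmis = [P >> np.eye(n), (M + M.T) / 2 << -0.1 * np.eye(n)]
      cp.Problem(cp.Minimize(0), lmis).solve(solver=cp.SCS)
      L = np.linalg.solve(P.value, Y.value)         # stabilizing observer gain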

  1. UPLC-MS/MS determination of ephedrine, methylephedrine, amygdalin and glycyrrhizic acid in Beagle plasma and its application to a pharmacokinetic study after oral administration of Ma Huang Tang.

    PubMed

    Yan, Tianhua; Fu, Qiang; Wang, Jing; Ma, Shiping

    2015-02-01

    An ultra performance liquid chromatography-mass spectrometric (UPLC-MS) method was developed to investigate the pharmacokinetic properties of ephedrine, methylephedrine, amygdalin, and glycyrrhizic acid after oral gavage of Ma Huang Tang (MHT) in Beagles. Beagle plasma samples were pretreated using liquid-liquid extraction, and chromatographic separation was performed on a C18 column using a linear gradient of a water-formic acid mixture (0.1%). The pharmacokinetic parameters of ephedrine, methylephedrine, amygdalin, and glycyrrhizic acid from MHT in Beagles were quantitatively determined by UPLC with a tandem mass detector. The qualitative detection of the four compounds was accomplished by selected ion monitoring in negative/positive ion mode electrospray ionization mass spectrometry (ESI-MS). Detection was based on multiple reaction monitoring with the precursor-to-product ion transitions m/z 166.096 → 116.983 (ephedrine), m/z 179.034 → 146.087 (methylephedrine), m/z 456.351 → 323.074 (amygdalin), and m/z 821.606 → 351.062 (glycyrrhizic acid). The selectivity, sensitivity, linearity, accuracy, precision, extraction recovery, ion suppression, and stability were within the acceptable ranges. The method described was successfully applied to reveal the pharmacokinetic properties of ephedrine, methylephedrine, amygdalin, and glycyrrhizic acid after oral gavage of MHT in Beagles. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Comprehensive Gas-Phase Peptide Ion Structure Studies Using Ion Mobility Techniques: Part 2. Gas-Phase Hydrogen/Deuterium Exchange for Ion Population Estimation.

    PubMed

    Khakinejad, Mahdiar; Ghassabi Kondalaji, Samaneh; Tafreshian, Amirmahdi; Valentine, Stephen J

    2017-05-01

    Gas-phase hydrogen/deuterium exchange (HDX) using D2O reagent and collision cross-section (CCS) measurements are utilized to monitor the ion conformers of the model peptide acetyl-PAAAAKAAAAKAAAAKAAAAK. The measurements are carried out on a home-built ion mobility instrument coupled to a linear ion trap mass spectrometer with electron transfer dissociation (ETD) capabilities. ETD is utilized to obtain per-residue deuterium uptake data for select ion conformers, and a new algorithm is presented for interpreting the HDX data. Using molecular dynamics (MD) production data and a hydrogen accessibility scoring (HAS)-number of effective collisions (NEC) model, hypothetical HDX behavior is attributed to various in-silico candidate (CCS match) structures. The HAS-NEC model is applied to all candidate structures, and non-negative linear regression is employed to determine the structure contributions that best match the deuterium uptake. The accuracy of the HAS-NEC model is tested by comparing predicted and experimental isotopic envelopes for several of the observed c-ions. It is proposed that gas-phase HDX can be utilized effectively as a second criterion (after CCS matching) for filtering suitable MD candidate structures. In the second step of structure elucidation, 13 nominal structures were selected from a pool of 300 candidate structures, and a population contribution was proposed for each of these ions.

  3. Thin layer chromatography-densitometric determination of some non-sedating antihistamines in combination with pseudoephedrine or acetaminophen in synthetic mixtures and in pharmaceutical formulations.

    PubMed

    El-Kommos, Michael E; El-Gizawy, Samia M; Atia, Noha N; Hosny, Noha M

    2014-03-01

    The combination of certain non-sedating antihistamines (NSA) such as fexofenadine (FXD), ketotifen (KET) and loratadine (LOR) with pseudoephedrine (PSE) or acetaminophen (ACE) is widely used in the treatment of allergic rhinitis, conjunctivitis and chronic urticaria. A rapid, simple, selective and precise densitometric method was developed and validated for simultaneous estimation of six synthetic binary mixtures and their pharmaceutical dosage forms. The method employed thin layer chromatography aluminum plates precoated with silica gel G 60 F254 as the stationary phase. The mobile phases chosen for development gave compact bands for the mixtures FXD-PSE (I), KET-PSE (II), LOR-PSE (III), FXD-ACE (IV), KET-ACE (V) and LOR-ACE (VI) [Retardation factor (Rf ) values were (0.20, 0.32), (0.69, 0.34), (0.79, 0.13), (0.36, 0.70), (0.51, 0.30) and (0.76, 0.26), respectively]. Spectrodensitometric scanning integration was performed at 217, 218, 218, 233, 272 and 251 nm for the mixtures I-VI, respectively. The linear regression data for the calibration plots showed an excellent linear relationship. The method was validated for precision, accuracy, robustness and recovery. Limits of detection and quantitation were calculated. Statistical analysis proved that the method is reproducible and selective for the simultaneous estimation of these binary mixtures. Copyright © 2013 John Wiley & Sons, Ltd.

  4. The best alternative for estimating reference crop evapotranspiration in different sub-regions of mainland China.

    PubMed

    Peng, Lingling; Li, Yi; Feng, Hao

    2017-07-14

    Reference crop evapotranspiration (ETo) is a critically important parameter for climatological, hydrological and agricultural management. The FAO56 Penman-Monteith (PM) equation has been recommended as the standardized ETo (ETo,s) equation, but it has a high requirement for climatic data. There is a practical need to find the best alternative method for estimating ETo in regions where full climatic data are lacking. The spatiotemporal variations, relative errors, standard deviations and Nash-Sutcliffe efficiency coefficients of monthly and annual ETo,s and ETo,i (i = 1, 2, …, 10) values estimated by 10 selected methods (i.e., Irmak et al., Makkink, Priestley-Taylor, Hargreaves-Samani, Droogers-Allen, Berti et al., Doorenbos-Pruitt, Wright and Valiantzas, respectively) were comprehensively compared using data from 552 sites in mainland China over 1961-2013. The method proposed by Berti et al. (2014) was selected as the best alternative to FAO56-PM because it is computationally simple, uses only temperature data, describes the spatiotemporal characteristics of ETo,s in the different sub-regions and in mainland China with generally good accuracy, and correlates linearly with the FAO56-PM method very well. The parameters of the linear correlations between the ETo values of the two methods were calibrated for each site, with the smallest coefficient of determination being 0.87.
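
    The per-site calibration mentioned at the end is an ordinary least-squares line between the two ETo series. A minimal sketch with simulated monthly values (hypothetical numbers, for illustration only):

      import numpy as np

      rng = np.random.default_rng(0)
      eto_alt = rng.uniform(1.0, 8.0, 120)                     # alternative ETo, mm/day
      eto_pm = 1.05 * eto_alt - 0.2 + rng.normal(0, 0.2, 120)  # FAO56-PM ETo
      slope, intercept = np.polyfit(eto_alt, eto_pm, 1)        # per-site calibration
      r2 = np.corrcoef(eto_alt, eto_pm)[0, 1] ** 2             # cf. the 0.87 floor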

  5. Development and validation of a reversed-phase high-performance thin-layer chromatography-densitometric method for determination of atorvastatin calcium in bulk drug and tablets.

    PubMed

    Shirkhedkar, Atul A; Surana, Sanjay J

    2010-01-01

    Atorvastatin calcium is a synthetic HMG-CoA reductase inhibitor that is used as a cholesterol-lowering agent. A simple, sensitive, selective, and precise RP-HPTLC-densitometric determination of atorvastatin calcium both as bulk drug and from pharmaceutical formulation was developed and validated according to International Conference on Harmonization guidelines. The method used aluminum sheets precoated with silica gel 60 RP18F254S as the stationary phase, and the mobile phase consisted of methanol-water (3.5 + 1.5, v/v). The system gave a compact band for atorvastatin calcium with an Rf value of 0.62 ± 0.02. Densitometric quantification was carried out at 246 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with r = 0.9992 in the working concentration range of 100-800 ng/band. The method was validated for precision, accuracy, ruggedness, robustness, specificity, recovery, LOD, and LOQ. The LOD and LOQ were 6 and 18 ng, respectively. The drug underwent hydrolysis when subjected to acidic conditions and was found to be stable under alkali, oxidation, dry heat, and photodegradation conditions. Statistical analysis proved that the developed RP-HPTLC-densitometry method is reproducible and selective and that it can be applied for identification and quantitative determination of atorvastatin calcium in bulk drug and tablet formulation.

  6. Analysis of Slope Limiters on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Berger, Marsha; Aftosmis, Michael J.

    2005-01-01

    This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed which preserves linearity for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
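
    The linearity-preservation failure is easy to reproduce: on a stretched mesh, a limiter that implicitly assumes uniform spacing clips the gradient of an exactly linear field, while the divided-difference form recovers it. A toy sketch (not the paper's formulations):

      import numpy as np

      x = np.cumsum(np.r_[0.0, 1.1 ** np.arange(10)])   # stretched cell centers
      u = 2.0 * x + 1.0                                 # exactly linear field
      i = 5
      dxm, dxp = x[i] - x[i - 1], x[i + 1] - x[i]
      dum, dup = u[i] - u[i - 1], u[i + 1] - u[i]
      minmod = lambda a, b: np.sign(a) * min(abs(a), abs(b)) if a * b > 0 else 0.0
      g_divided = minmod(dum / dxm, dup / dxp)            # 2.0: linearity preserved
      g_uniform = minmod(dum, dup) / (0.5 * (dxm + dxp))  # < 2.0: slope clipped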

  7. Identification, preparation and UHPLC determination of process-related impurity in zolmitriptan.

    PubMed

    Douša, Michal; Gibala, Petr; Rádl, Stanislav; Klecán, Ondřej; Mandelová, Zuzana; Břicháč, Jiří; Pekárek, Tomáš

    2012-01-25

    A new impurity was detected and determined using gradient ion-pair UHPLC method with UV detection in zolmitriptan (ZOL). Using MS, NMR and IR study the impurity was identified as (4S,4'S)-4,4'-(2,2'-(4-(dimethylamino)butane-1,1-diyl)bis(3-(2-(dimethylamino) ethyl)-1H-indole-5,2-diyl))bis(methylene)di(oxazolidin-2-one) (ZOL-dimer). The standard of ZOL-dimer was consequently prepared via organic synthesis followed by semipreparative HPLC purification. The UHPLC method was optimized in order to selectively detect and quantify other known and unknown process-related impurities and degradation products of ZOL as well. The presented method which was validated with respect to linearity, accuracy, precision and selectivity has an advantage of a very quick UHPLC chromatographic separation (less than 7 min including re-equilibration time) and therefore is highly suitable for routine analysis of related substances and stability studies of ZOL. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.

    PubMed

    Valto, Piia; Knuutinen, Juha; Alén, Raimo

    2011-04-01

    A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated by a commercial HPLC column (a modified stationary C(18) phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as a mobile phase. The internal standard (myristic acid) method was used to calculate the correlation coefficients and in the quantitation of the results. In the thorough quality parameters measurement, a mixture of these model acids in aqueous media as well as in six different paper machine process waters was quantitatively determined. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provided a faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic magnetic resonance images (MRI) are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. This considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing the measurement time while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are first non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with expert manual segmentation, under a leave-one-out scheme on the training database.

  10. Realisation Of Polarisation Sensitive And Frequency Selective Surfaces On Microwave Reflectors By Laser Evaporation

    NASA Astrophysics Data System (ADS)

    Halm, R.; Kupper, Th.; Fischer, A.

    1987-01-01

    Gridded reflectors are used on communication satellite antennas to provide frequency reuse in dual linear polarisation mode of operation. The polarisation sensitive surface consists of metallic strips, forming a grid with widths and spacings of the order of 0.1 mm. The use of frequency-selective surface (FSS) subreflectors allows the simultaneous generation of different microwave beams with the same main reflector. Such a reflector will require a structure of conductive arrays of either dipoles, rings, squares or square loops with typical dimensions of the order of 3-6 mm. Optimisation of the electrical design leads to critical dimensioning of these structures. By direct ablation of an aluminium surface coating by means of laser evaporation, high accuracies can be achieved. The major requirements were to minimize thermal damage of the substrate material and to produce dimensionally accurate grids. Experiments were carried out using a pulsed TEA-CO2 laser and a Q-switched Alexandrite laser. Details of the experimental set-up and conditions are described.

  11. 40 CFR 63.8 - Monitoring requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...

  12. 40 CFR 63.8 - Monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...

  13. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    NASA Astrophysics Data System (ADS)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices and the law of lumped Markov chains depends linearly on the minimum degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.

  14. Variance Component Selection With Applications to Microbiome Taxonomic Data.

    PubMed

    Zhai, Jing; Kim, Juhyun; Knox, Kenneth S; Twigg, Homer L; Zhou, Hua; Zhou, Jin J

    2018-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  15. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.

  16. A comparison of different chemometrics approaches for the robust classification of electronic nose data.

    PubMed

    Gromski, Piotr S; Correa, Elon; Vaughan, Andrew A; Wedge, David C; Turner, Michael L; Goodacre, Royston

    2014-11-01

    Accurate detection of certain chemical vapours is important, as these may be diagnostic for the presence of weapons, drugs of misuse or disease. In order to achieve this, chemical sensors could be deployed remotely. However, the readout from such sensors is a multivariate pattern, and this needs to be interpreted robustly using powerful supervised learning methods. Therefore, in this study, we compared the classification accuracy of four pattern recognition algorithms which include linear discriminant analysis (LDA), partial least squares-discriminant analysis (PLS-DA), random forests (RF) and support vector machines (SVM) which employed four different kernels. For this purpose, we have used electronic nose (e-nose) sensor data (Wedge et al., Sensors Actuators B Chem 143:365-372, 2009). In order to allow direct comparison between our four different algorithms, we employed two model validation procedures based on either 10-fold cross-validation or bootstrapping. The results show that LDA (91.56% accuracy) and SVM with a polynomial kernel (91.66% accuracy) were very effective at analysing these e-nose data. These two models gave superior prediction accuracy, sensitivity and specificity in comparison to the other techniques employed. With respect to the e-nose sensor data studied here, our findings recommend that SVM with a polynomial kernel should be favoured as a classification method over the other statistical models that we assessed. SVM with non-linear kernels have the advantage that they can be used for classifying non-linear as well as linear mapping from analytical data space to multi-group classifications and would thus be a suitable algorithm for the analysis of most e-nose sensor data.
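
    The comparison protocol is straightforward to reproduce with scikit-learn: score each classifier on identical cross-validation folds and compare mean accuracies. The sketch below uses simulated data in place of the e-nose measurements; PLS-DA is not built into scikit-learn, so a random forest stands alongside LDA and the SVMs.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=300, n_features=32, n_informative=8,
                                 n_classes=3, random_state=0)  # stand-in e-nose data
      models = {"LDA": LinearDiscriminantAnalysis(),
                "SVM-poly": SVC(kernel="poly", degree=3),
                "SVM-rbf": SVC(kernel="rbf"),
                "RF": RandomForestClassifier(n_estimators=200, random_state=0)}
      for name, clf in models.items():
          acc = cross_val_score(clf, X, y, cv=10)      # 10-fold cross-validation
          print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")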

  17. The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.

    PubMed

    Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y

    2016-04-01

    To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained and digital models were extracted from CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on plaster models and the corresponding digital models. The measurements included arch length and width at different sites. Paired t test and Bland-Altman analysis were used to evaluate the accuracy of measurements on digital models compared to the plaster models. Also, intraclass correlation coefficients (ICCs) were used to evaluate the reproducibility of the measurements in order to assess the intraobserver reliability. The statistical analysis showed significant differences in 5 of the 14 variables, and the mean differences ranged from -0.48 to 0.51 mm. The Bland-Altman analysis revealed mean differences of 0.14 ± 0.56 mm in the maxilla and 0.05 ± 0.96 mm in the mandible, with limits of agreement between the two methods ranging from -1.2 to 0.96 mm and from -1.8 to 1.9 mm, respectively. The intraobserver reliability values were determined for all 14 variables of the two types of models separately. The mean ICC value for the plaster models was 0.984 (0.924-0.999), while it was 0.946 for the CBCT models (range from 0.850 to 0.985). Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
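
    Bland-Altman statistics reduce to the mean of the paired differences (the bias) and bias ± 1.96 SD (the limits of agreement). A minimal sketch with hypothetical paired measurements:

      import numpy as np

      plaster = np.array([34.2, 36.1, 28.9, 31.5, 40.3, 33.7])  # mm, hypothetical
      cbct = np.array([34.5, 35.8, 29.3, 31.2, 40.9, 33.4])
      diff = cbct - plaster
      bias = diff.mean()                            # mean difference
      sd = diff.std(ddof=1)
      loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # limits of agreement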

  18. Genomic selection accuracies within and between environments and small breeding groups in white spruce.

    PubMed

    Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean

    2014-12-02

    Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend to build GS models for prediction within the same breeding population only. Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals in order to obtain high prediction accuracy such as that obtained in the present study. A number of markers limited to a few hundred would not negatively impact prediction accuracies, but these could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
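
    The accuracy statistic reported here is the correlation between cross-validated genomic predictions and observed phenotypes. A compact RR-BLUP-style sketch on simulated genotypes (illustrative only; not the study's models, which also used pedigree information):

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(2)
      M = rng.integers(0, 3, (300, 1000)).astype(float)   # SNP genotypes 0/1/2
      beta = rng.normal(0.0, 0.05, 1000)                  # simulated marker effects
      y = M @ beta + rng.normal(0.0, 1.0, 300)            # phenotype with noise
      y_hat = cross_val_predict(Ridge(alpha=100.0), M, y, cv=10)
      r = np.corrcoef(y, y_hat)[0, 1]                     # prediction accuracy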

  19. A new approach for assessment of wear in metal-backed acetabular cups using computed tomography: a phantom study with retrievals.

    PubMed

    Jedenmalm, Anneli; Noz, Marilyn E; Olivecrona, Henrik; Olivecrona, Lotta; Stark, Andre

    2008-04-01

    Polyethylene wear is an important cause of aseptic loosening in hip arthroplasty. Detection of significant wear usually happens late, since available diagnostic techniques are either not sensitive enough or too complicated and expensive for routine use. This study evaluates a new approach for measurement of linear wear of metal-backed acetabular cups using CT as the intended clinically feasible method. Eight retrieved uncemented metal-backed acetabular cups were scanned twice ex vivo using CT. The linear penetration depth of the femoral head into the cup was measured in the CT volumes using dedicated software. Landmark points were placed on the CT images of cup and head, and also on a reference plane in order to calculate the wear vector magnitude and angle to one of the axes. A coordinate-measuring machine was used to test the accuracy of the proposed CT method. For this purpose, the head diameters were also measured by both methods. Accuracy of the CT method was 0.6 mm for linear wear and 27 degrees for the wear vector angle. No systematic difference was found between CT scans. This study on explanted acetabular cups shows that CT is capable of reliable measurement of linear wear in acetabular cups at a clinically relevant level of accuracy. It was also possible to use the method for assessment of the direction of wear.

  20. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use leave-one-pair-out cross-validation method, instead of re-substitution method, which is overoptimistic and hence might lead to wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
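
    Leave-one-pair-out cross-validation for the AUC can be written directly: for every case-control pair, fit the combination on the remaining data and check whether the combined score ranks the case above the control; the fraction of correctly ordered pairs estimates the AUC. The sketch below uses simulated biomarkers, with logistic-regression weights as a stand-in for any of the reviewed combination methods.

      import numpy as np
      from itertools import product
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X0 = rng.normal(0.0, 1.0, (30, 3))            # controls, 3 biomarkers
      X1 = rng.normal(0.7, 1.0, (30, 3))            # cases
      pairs = list(product(range(len(X1)), range(len(X0))))
      correct = 0.0
      for i, j in pairs:
          Xtr = np.vstack([np.delete(X1, i, 0), np.delete(X0, j, 0)])
          ytr = np.r_[np.ones(len(X1) - 1), np.zeros(len(X0) - 1)]
          w = LogisticRegression(max_iter=1000).fit(Xtr, ytr).coef_.ravel()
          correct += float(X1[i] @ w > X0[j] @ w)   # case ranked above control?
      auc_lpo = correct / len(pairs)                # honest AUC, unlike re-substitution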

  1. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.

  2. A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form.

    PubMed

    Thiruvengada, Rajan Vs; Mohamed, Saleem Ts; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C

    2010-10-01

    An isocratic reversed phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C(8) SS column (250 mm × 4.6 mm, 5-μm particle size). The mobile phase comprised two solvents (Solvent A: buffer of 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine, and Solvent B: acetonitrile). The ratio of Solvent A:Solvent B was 75:25 v/v. The flow rate was 1.0 mL/min with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulation.

  3. A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form

    PubMed Central

    Thiruvengada, Rajan VS; Mohamed, Saleem TS; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C

    2010-01-01

    An isocratic reversed phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C8 SS column (250 mm × 4.6 mm, 5-μm particle size). The mobile phase comprised two solvents (Solvent A: buffer of 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine, and Solvent B: acetonitrile). The ratio of Solvent A:Solvent B was 75:25 v/v. The flow rate was 1.0 mL/min with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulation. PMID:21264104

  4. On-line sample cleanup and enrichment chromatographic technique for the determination of ambroxol in human serum.

    PubMed

    Emara, Samy; Kamal, Maha; Abdel Kawi, Mohamed

    2012-02-01

    A sensitive and efficient on-line cleanup and pre-concentration method has been developed using a column-switching technique and a protein-coated µ-Bondapak CN silica pre-column for quantification of ambroxol (AM) in human serum. The method is performed by direct injection of the serum sample onto a protein-coated µ-Bondapak CN silica pre-column, where AM is pre-concentrated and retained, while proteins and very polar constituents are washed to waste using phosphate buffered saline (pH 7.4). The retained analyte on the pre-column is directed onto a C(18) analytical column for separation, with a mobile phase consisting of a mixture of methanol and distilled deionized water (containing 1% triethylamine adjusted to pH 3.5 with ortho-phosphoric acid) in the ratio of 50:50 (v/v). Detection is performed at 254 nm. The calibration curve is linear over the concentration range of 12-120 ng/mL (r² = 0.9995). The recovery, selectivity, linearity, precision, and accuracy of the method are convenient for pharmacokinetic studies or routine assays.

  5. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  6. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown, partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. Other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)

  7. A candidate reference method using ICP-MS for sweat chloride quantification.

    PubMed

    Collie, Jake T; Massie, R John; Jones, Oliver A H; Morrison, Paul D; Greaves, Ronda F

    2016-04-01

    The aim of the study was to develop a method for sweat chloride (Cl) quantification using Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to present to the Joint Committee for Traceability in Laboratory Medicine (JCTLM) as a candidate reference method for the diagnosis of cystic fibrosis (CF). Calibration standards were prepared from sodium chloride (NaCl) to cover the expected range of sweat Cl values. Germanium (Ge) and scandium (Sc) were selected as on-line (instrument based) internal standards (IS) and gallium (Ga) as the off-line (sample based) IS. The method was validated through linearity, accuracy and imprecision studies as well as enrolment into the Royal College of Pathologists of Australasia Quality Assurance Program (RCPAQAP) for sweat electrolyte testing. Two variations of the ICP-MS method were developed, an on-line and off-line IS, and compared. Linearity was determined up to 225 mmol/L with a limit of quantitation of 7.4 mmol/L. The off-line IS demonstrated increased accuracy through the RCPAQAP performance assessment (CV of 1.9%, bias of 1.5 mmol/L) in comparison to the on-line IS (CV of 8.0%, bias of 3.8 mmol/L). Paired t-tests confirmed no significant differences between sample means of the two IS methods (p=0.53) or from each method against the RCPAQAP target values (p=0.08 and p=0.29). Both on and off-line IS methods generated highly reproducible results and excellent linear comparison to the RCPAQAP target results. ICP-MS is a highly accurate method with a low limit of quantitation for sweat Cl analysis and should be recognised as a candidate reference method for the monitoring and diagnosis of CF. Laboratories that currently practice sweat Cl analysis using ICP-MS should include an off-line IS to help negate any pre-analytical errors.

  8. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank, up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  9. How genome complexity can explain the difficulty of aligning reads to genomes.

    PubMed

    Phan, Vinhthuy; Gao, Shanshan; Tran, Quang; Vo, Nam S

    2015-01-01

    Although it is frequently observed that aligning short reads to genomes becomes harder if they contain complex repeat patterns, there has not been much effort to quantify the relationship between complexity of genomes and difficulty of short-read alignment. Existing measures of sequence complexity seem unsuitable for the understanding and quantification of this relationship. We investigated several measures of complexity and found that length-sensitive measures of complexity had the highest correlation to accuracy of alignment. In particular, the rate of distinct substrings of length k, where k is similar to the read length, correlated very highly to alignment performance in terms of precision and recall. We showed how to compute this measure efficiently in linear time, making it useful in practice to estimate quickly the difficulty of alignment for new genomes without having to align reads to them first. We showed how the length-sensitive measures could provide additional information for choosing aligners that would align consistently accurately on new genomes. We formally established a connection between genome complexity and the accuracy of short-read aligners. The relationship between genome complexity and alignment accuracy provides additional useful information for selecting suitable aligners for new genomes. Further, this work suggests that the complexity of genomes sometimes should be thought of in terms of specific computational problems, such as the alignment of short reads to genomes.
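
    The length-sensitive measure highlighted above, the rate of distinct substrings of length k, is simple to compute. A hash-set version is sketched below (linear in the number of k-length windows; a simplification of the authors' strictly linear-time construction):

      def distinct_kmer_rate(genome: str, k: int) -> float:
          # Rate of distinct length-k substrings: values near 1 indicate a
          # complex (repeat-poor) genome that is easier to align against.
          if len(genome) < k:
              return 0.0
          n_windows = len(genome) - k + 1
          kmers = {genome[i:i + k] for i in range(n_windows)}
          return len(kmers) / n_windows

      rate = distinct_kmer_rate("ACGTACGTACGGT", 4)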

  10. Assessment of the UV camera sulfur dioxide retrieval for point source plumes

    USGS Publications Warehouse

    Dalton, M.P.; Watson, I.M.; Nadeau, P.A.; Werner, C.; Morrow, W.; Shannon, J.M.

    2009-01-01

    Digital cameras, sensitive to specific regions of the ultra-violet (UV) spectrum, have been employed for quantifying sulfur dioxide (SO2) emissions in recent years. The instruments make use of the selective absorption of UV light by SO2 molecules to determine pathlength concentration. Many monitoring advantages are gained by using this technique, but the accuracy and limitations have not been thoroughly investigated. The effects of some user-controlled parameters, including image exposure duration, the diameter of the lens aperture, the frequency of calibration cell imaging, and the use of single or paired bandpass filters, have not yet been addressed. In order to clarify methodological consequences and quantify accuracy, laboratory and field experiments were conducted. Images were collected of calibration cells under varying observational conditions, and our conclusions provide guidance for enhanced image collection. Results indicate that the calibration cell response is reliably linear below 1500 ppm m, but that the response is significantly affected by changing light conditions. Exposure durations that produced maximum image digital numbers above 32 500 counts can reduce noise in plume images. Sulfur dioxide retrieval results from a coal-fired power plant plume were compared to direct sampling measurements and the results indicate that the accuracy of the UV camera retrieval method is within the range of current spectrometric methods. © 2009 Elsevier B.V.

  11. Brain-computer interface (BCI) evaluation in people with amyotrophic lateral sclerosis.

    PubMed

    McCane, Lynn M; Sellers, Eric W; McFarland, Dennis J; Mak, Joseph N; Carmack, C Steve; Zeitlin, Debra; Wolpaw, Jonathan R; Vaughan, Theresa M

    2014-06-01

    Brain-computer interfaces (BCIs) might restore communication to people severely disabled by amyotrophic lateral sclerosis (ALS) or other disorders. We sought to: 1) define a protocol for determining whether a person with ALS can use a visual P300-based BCI; 2) determine what proportion of this population can use the BCI; and 3) identify factors affecting BCI performance. Twenty-five individuals with ALS completed an evaluation protocol using a standard 6 × 6 matrix and parameters selected by stepwise linear discrimination. With an 8-channel EEG montage, the subjects fell into two groups by BCI accuracy (chance accuracy 3%). Seventeen averaged 92 (± 3)% (range 71-100%), which is adequate for communication (G70 group). Eight averaged 12 (± 6)% (range 0-36%), inadequate for communication (L40 group). Performance did not correlate with disability: 11/17 (65%) of G70 subjects were severely disabled (i.e. ALSFRS-R < 5). All L40 subjects had visual impairments (e.g. nystagmus, diplopia, ptosis). P300 was larger and more anterior in G70 subjects. A 16-channel montage did not significantly improve accuracy. In conclusion, most people severely disabled by ALS could use a visual P300-based BCI for communication. In those who could not, visual impairment was the principal obstacle. For these individuals, auditory P300-based BCIs might be effective.

  12. An ultra-high performance liquid chromatography method to determine the skin penetration of an octyl methoxycinnamate-loaded liquid crystalline system.

    PubMed

    Prado, A H; Borges, M C; Eloy, J O; Peccinini, R G; Chorilli, M

    2017-10-01

    Cutaneous penetration is a critical factor in the use of sunscreen, as the compounds should not reach systemic circulation in order to avoid the induction of toxicity. The evaluation of the skin penetration and permeation of the UVB filter octyl methoxycinnamate (OMC) is essential for the development of a successful sunscreen formulation. Liquid-crystalline systems are innovative and potential carriers of OMC, which possess several advantages, including controlled release and protection of the filter from degradation. In this study, a new and effective method was developed using ultra-high performance liquid chromatography (UPLC) with ultraviolet detection (UV) for the quantitative analysis of penetration of OMC-loaded liquid crystalline systems into the skin. The following parameters were assessed in the method: selectivity, linearity, precision, accuracy, robustness, limit of detection (LOD), and limit of quantification (LOQ). The analytical curve was linear in the range from 0.25 to 250 μg/mL, precise, with a standard deviation of 0.05-1.24%, with an accuracy in the range from 96.72 to 105.52%, and robust, with adequate values for the LOD and LOQ of 0.1 and 0.25 μg/mL, respectively. The method was successfully used to determine the in vitro skin permeation of OMC-loaded liquid crystalline systems. The results of the in vitro tests on Franz cells showed low cutaneous permeation and high retention of the OMC, particularly in the stratum corneum, owing to its high lipophilicity, which is desirable for a sunscreen formulation.
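
    A conventional way to obtain LOD and LOQ figures in such validations is the ICH Q2 calibration-curve estimate, 3.3·σ/S and 10·σ/S, where σ is the residual standard deviation and S the slope; the exact procedure used in any given study may differ. A sketch with hypothetical calibration data:

      import numpy as np

      conc = np.array([0.25, 1.0, 5.0, 25.0, 100.0, 250.0])   # ug/mL, hypothetical
      resp = np.array([0.9, 3.6, 18.2, 90.5, 361.0, 902.0])   # detector response
      slope, intercept = np.polyfit(conc, resp, 1)
      resid_sd = np.std(resp - (slope * conc + intercept), ddof=2)
      lod = 3.3 * resid_sd / slope                            # ICH Q2 estimate
      loq = 10.0 * resid_sd / slope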

  13. Development of an improved sample preparation platform for acidic endogenous hormones in plant tissues using electromembrane extraction.

    PubMed

    Suh, Joon Hyuk; Han, Sang Beom; Wang, Yu

    2018-02-02

    Despite their importance in pivotal signaling pathways, the analysis of plant hormones is a challenge due to their trace quantities and complex matrices. Here, to address this issue, we present an electromembrane extraction technology combined with liquid chromatography-tandem mass spectrometry for the determination of acidic plant hormones, including jasmonic acid, abscisic acid, salicylic acid, benzoic acid, gibberellic acid and gibberellin A4, in plant tissues. Factors influencing extraction efficiency, such as voltage, extraction time and stirring rate, were optimized using a design of experiments. Analytical performance was evaluated in terms of specificity, linearity, limit of quantification, precision, accuracy, recovery and repeatability. The results showed good linearity (r² > 0.995), precision and acceptable accuracy. The limit of quantification ranged from 0.1 to 10 ng/mL, and the recoveries were 34.6-50.3%. The developed method was applied to citrus leaf samples, showing better clean-up efficiency as well as higher sensitivity compared to a previous method using liquid-liquid extraction. Organic solvent consumption was minimized during the process, making it an appealing method. More noteworthy, electromembrane extraction has scarcely been applied to plant tissues, and this is the first time that major plant hormones were extracted using this technology, with high sensitivity and selectivity. Taken together, this work provides not only a novel sample preparation platform using an electric field for plant hormones, but also a good example of extracting from complex plant tissues in a simple and effective way. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A validated high-performance liquid chromatographic method for the determination of moclobemide and its two metabolites in human plasma and application to pharmacokinetic studies.

    PubMed

    Plenis, Alina; Chmielewska, Aleksandra; Konieczna, Lucyna; Lamparczyk, Henryk

    2007-09-01

    A rapid and sensitive reversed-phase high-performance liquid chromatographic method (RP-HPLC) with ultraviolet detection has been developed for the determination of moclobemide and its metabolites, p-chloro-N-(2-morpholinoethyl)benzamide N'-oxide (Ro 12-5637) and p-chloro-N-[2-(3-oxomorpholino)ethyl]-benzamide (Ro 12-8095), in human plasma. The assay was performed after a single liquid-liquid extraction with dichloromethane at alkaline pH using phenacetin as the internal standard. Chromatographic separation was performed on a C(18) column using a mixture of acetonitrile and water (25:75, v/v), adjusted to pH 2.7 with ortho-phosphoric acid, as the mobile phase. Spectrophotometric detection was performed at 239 nm. The method has been validated for accuracy, precision, selectivity, linearity, recovery and stability. The quantification limit for moclobemide and Ro 12-8095 was 10 ng/mL, and for Ro 12-5637 it was 30 ng/mL. Linearity of the method was confirmed for the range 20-2500 ng/mL for moclobemide (r = 0.9998), 20-1750 ng/mL for Ro 12-8095 (r = 0.9996) and 30-350 ng/mL for Ro 12-5637 (r = 0.9991). Moreover, within-day and between-day precisions and accuracies of the method were established. The described method was successfully applied in pharmacokinetic studies of the parent drug and its two metabolites after a single oral administration of 150 mg of moclobemide to 20 healthy volunteers. Copyright © 2007 John Wiley & Sons, Ltd.

  15. Comparison of UV spectrophotometry and high performance liquid chromatography methods for the determination of repaglinide in tablets.

    PubMed

    Dhole, Seema M; Khedekar, Pramod B; Amnerkar, Nikhil D

    2012-07-01

    Repaglinide is a meglitinide class of antidiabetic drug used for the treatment of type 2 diabetes mellitus. A fast and reliable method for the determination of repaglinide was highly desirable to support formulation screening and quality control. UV spectrophotometric and reversed-phase high performance liquid chromatography (RP-HPLC) methods were developed for the determination of repaglinide in the tablet dosage form. The UV spectrum was recorded between 200 and 400 nm using methanol as the solvent, and the wavelength of 241 nm was selected for the determination of repaglinide. RP-HPLC analysis was carried out using an Agilent TC-C18 (2) column and a mobile phase composed of methanol and water (80:20 v/v, pH adjusted to 3.5 with orthophosphoric acid) at a flow rate of 1.0 ml/min. Parameters such as linearity, precision, accuracy, recovery, specificity and ruggedness were studied as described in the International Conference on Harmonization (ICH) guidelines. The developed methods illustrated excellent linearity (r² > 0.999) in the concentration ranges of 5-30 μg/ml and 5-50 μg/ml for the UV spectrophotometric and HPLC methods, respectively. Precision (%R.S.D. < 1.50) and mean recoveries in the range of 99.63-100.45% for the UV spectrophotometric method and 99.71-100.25% for the HPLC method confirm the accuracy of the methods. The developed methods were found to be reliable, simple, fast, accurate and successfully used for the quality control of repaglinide as a bulk drug and in pharmaceutical formulations.

  16. Comparison of UV spectrophotometry and high performance liquid chromatography methods for the determination of repaglinide in tablets

    PubMed Central

    Dhole, Seema M.; Khedekar, Pramod B.; Amnerkar, Nikhil D.

    2012-01-01

    Background: Repaglinide is a meglitinide-class antidiabetic drug used for the treatment of type 2 diabetes mellitus. A fast and reliable method for the determination of repaglinide was highly desirable to support formulation screening and quality control. Objective: UV spectrophotometric and reversed-phase high performance liquid chromatography (RP-HPLC) methods were developed for the determination of repaglinide in the tablet dosage form. Materials and Methods: The UV spectrum was recorded between 200 and 400 nm using methanol as the solvent, and a wavelength of 241 nm was selected for the determination of repaglinide. RP-HPLC analysis was carried out using an Agilent TC-C18 (2) column and a mobile phase composed of methanol and water (80:20 v/v, pH adjusted to 3.5 with orthophosphoric acid) at a flow rate of 1.0 ml/min. Parameters such as linearity, precision, accuracy, recovery, specificity and ruggedness were studied as described in the International Conference on Harmonization (ICH) guidelines. Results: The developed methods showed excellent linearity (r2 > 0.999) in the concentration ranges of 5-30 μg/ml and 5-50 μg/ml for the UV spectrophotometric and HPLC methods, respectively. Precision was high (%R.S.D. < 1.50), and mean recoveries of 99.63-100.45% for the UV spectrophotometric method and 99.71-100.25% for the HPLC method demonstrated the accuracy of the methods. Conclusion: The developed methods were found to be reliable, simple, fast and accurate, and were successfully used for the quality control of repaglinide as a bulk drug and in pharmaceutical formulations. PMID:23781481

  17. Development and validation of a rapid LC-MS/MS method for simultaneous determination of netupitant and palonosetron in human plasma and its application to a pharmacokinetic study.

    PubMed

    Xu, Mingzhen; Ni, Yang; Li, Shihong; Du, Juan; Li, Huqun; Zhou, Ying; Li, Weiyong; Chen, Hui

    2016-08-01

    A simple liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated for the first time for the simultaneous determination of netupitant and palonosetron in human plasma, using ibrutinib as the internal standard (IS). Following liquid-liquid extraction, the compounds were eluted isocratically on a Phenomenex C18 column (50 mm × 2.0 mm, 3 μm) with a mobile phase consisting of acetonitrile and 10 mM ammonium acetate buffer (pH 9.0) (89:11, v/v) at a flow rate of 0.3 mL/min. The monitored ion transitions were m/z 579.5→522.4 for netupitant, m/z 297.3→110.2 for palonosetron and m/z 441.2→138.1 for the IS. The chromatographic run time was 2.5 min per injection, which made it possible to analyze more than 300 samples per day. The assay exhibited linear dynamic ranges of 5-1000 ng/mL for netupitant and 0.02-10 ng/mL for palonosetron in plasma. The values for both within- and between-day precision and accuracy were well within the generally accepted criteria for analytical methods (<15%). Selectivity, linearity, lower limit of quantification (LLOQ), accuracy, precision, stability, matrix effect, recovery and carry-over effect were evaluated for all analytes. The method is simple and rapid, and has been applied successfully to a pharmacokinetic study of netupitant and palonosetron in healthy volunteers. Copyright © 2016 Elsevier B.V. All rights reserved.
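
    A hedged sketch of the within- and between-day precision/accuracy evaluation this record mentions (acceptance <15%), using a deliberately simplified between-day estimate (standard deviation of day means rather than an ANOVA decomposition). The QC measurements and nominal level are invented; only the acceptance criterion and the 2.5 min run time come from the abstract.

        # Simplified within-/between-day precision and accuracy check (invented QC data).
        import numpy as np

        # Three validation days x five replicates of a mid-level QC (ng/mL), nominal 400.
        qc = np.array([
            [392, 405, 398, 410, 401],
            [388, 396, 407, 399, 403],
            [411, 395, 402, 390, 406],
        ])
        nominal = 400.0

        within_day_cv = np.mean(qc.std(axis=1, ddof=1) / qc.mean(axis=1)) * 100
        between_day_cv = qc.mean(axis=1).std(ddof=1) / qc.mean() * 100  # simplified estimate
        bias = (qc.mean() - nominal) / nominal * 100
        print(f"within-day CV = {within_day_cv:.1f}%, between-day CV = {between_day_cv:.1f}%, "
              f"bias = {bias:+.1f}% (all must be <15%)")

        # Throughput check: a 2.5 min run time is consistent with >300 samples/day.
        print(f"max injections/day = {24 * 60 / 2.5:.0f}")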

  18. Classification of electroencephalograph signals using time-frequency decomposition and linear discriminant analysis

    NASA Astrophysics Data System (ADS)

    Szuflitowska, B.; Orlowski, P.

    2017-08-01

    An automated detection system consists of two key steps: extraction of features from EEG signals and classification to detect pathological activity. The EEG sequences were analyzed using the Short-Time Fourier Transform, and the classification was performed using Linear Discriminant Analysis. The accuracy of the technique was tested on three sets of EEG signals: epilepsy, healthy and Alzheimer's disease. A classification error below 10% was considered a success. Higher accuracy was obtained for new data of unknown classes than for the testing data. The methodology can be helpful in differentiating epileptic seizures from EEG disturbances associated with Alzheimer's disease.
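
    The two-step pipeline described here (STFT feature extraction followed by LDA) can be sketched in a few lines of Python. The signals, labels, sampling rate and window length below are synthetic placeholders; the paper's EEG data and exact settings are not given in the abstract.

        # Toy STFT-feature + LDA classification pipeline (synthetic signals).
        import numpy as np
        from scipy.signal import stft
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs = 256                                      # assumed sampling rate, Hz
        signals = rng.standard_normal((90, fs * 4))   # 90 four-second epochs
        labels = np.repeat([0, 1, 2], 30)             # epilepsy / healthy / AD (toy)

        def stft_features(sig):
            _, _, Z = stft(sig, fs=fs, nperseg=128)
            return np.abs(Z).mean(axis=1)             # mean spectral magnitude per bin

        X = np.array([stft_features(s) for s in signals])
        clf = LinearDiscriminantAnalysis()
        print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())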

  19. Linearity-Preserving Limiters on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Berger, Marsha; Aftosmis, Michael; Murman, Scott

    2004-01-01

    This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. We note that on non-uniform grids the scalar formulation in standard use today sacrifices k-exactness, even for linear solutions, impacting both accuracy and convergence. We rewrite some well-known limiters in a new way to highlight their underlying symmetry, and use this to examine both traditional and novel limiter formulations. A consistent method of handling stretched meshes is developed, as is a new directional formulation in multiple dimensions for irregular grids. Results are presented demonstrating improved accuracy and convergence using a combination of model problems and complex three-dimensional examples.
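
    For readers unfamiliar with slope limiters, the sketch below shows two classical limiters written in the symmetric two-argument form the abstract alludes to, applied to one-sided slopes on a stretched 1-D mesh. This is a generic illustration, not the paper's specific reformulation; the mesh and data are invented.

        # Classical symmetric slope limiters on a non-uniform 1-D grid (illustrative).
        import numpy as np

        def minmod(a, b):
            """Zero when the slopes disagree in sign, else the smaller magnitude."""
            return np.where(a * b > 0.0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def van_leer(a, b):
            """Symmetric harmonic-mean form; vanishes at extrema (a*b <= 0)."""
            denom = np.abs(a) + np.abs(b)
            return np.where(a * b > 0.0,
                            (a * np.abs(b) + np.abs(a) * b) / np.where(denom == 0, 1, denom),
                            0.0)

        # One-sided differences at interior points of a stretched mesh (made-up data).
        x = np.array([0.0, 0.5, 1.5, 3.0])
        u = np.array([0.0, 0.4, 1.3, 2.9])
        left = (u[1:-1] - u[:-2]) / (x[1:-1] - x[:-2])
        right = (u[2:] - u[1:-1]) / (x[2:] - x[1:-1])
        print("minmod slopes:", minmod(left, right))
        print("van Leer slopes:", van_leer(left, right))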

  20. Real-data comparison of data mining methods in prediction of diabetes in Iran.

    PubMed

    Tapak, Lily; Mahjub, Hossein; Hamidi, Omid; Poorolajal, Jalal

    2013-09-01

    Diabetes is one of the most common non-communicable diseases in developing countries. Early screening and diagnosis play an important role in effective prevention strategies. This study compared two traditional classification methods (logistic regression and Fisher linear discriminant analysis) and four machine-learning classifiers (neural networks, support vector machines, fuzzy c-means, and random forests) in classifying persons with and without diabetes. The data set used in this study included 6,500 subjects from the Iranian national non-communicable diseases risk factor surveillance, obtained through a cross-sectional survey. The sample was based on cluster sampling of the Iranian population, conducted in 2005-2009 to assess the prevalence of major non-communicable disease risk factors. Ten risk factors that are commonly associated with diabetes were selected to compare the performance of the six classifiers in terms of sensitivity, specificity, total accuracy, and area under the receiver operating characteristic (ROC) curve. Support vector machines showed the highest total accuracy (0.986) as well as the largest area under the ROC curve (0.979); this method also showed high specificity (1.000) and sensitivity (0.820). All other methods produced total accuracies of more than 85%, but their sensitivity values were very low (less than 0.350). The results of this study indicate that, in terms of sensitivity, specificity, and overall classification accuracy, the support vector machine model ranks first among all the classifiers tested in the prediction of diabetes. Therefore, this approach is a promising classifier for predicting diabetes, and it should be further investigated for the prediction of other diseases.
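
    A hedged sketch of the evaluation protocol this record describes: several classifiers scored by sensitivity, specificity, total accuracy and ROC AUC on an imbalanced binary problem. Synthetic data stands in for the Iranian surveillance data set, which is not distributed; the classifier settings are defaults, not the paper's.

        # Compare classifiers on sensitivity, specificity, accuracy and AUC (synthetic data).
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix, roc_auc_score

        X, y = make_classification(n_samples=6500, n_features=10,
                                   weights=[0.9], random_state=1)  # imbalanced, like real screening data
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=1)

        for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                          ("SVM", SVC(probability=True)),
                          ("random forest", RandomForestClassifier())]:
            clf.fit(Xtr, ytr)
            tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
            auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
            print(f"{name}: sens={tp/(tp+fn):.3f} spec={tn/(tn+fp):.3f} "
                  f"acc={(tp+tn)/len(yte):.3f} AUC={auc:.3f}")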
