Study on Fuzzy Adaptive Fractional Order PIλDμ Control for Maglev Guiding System
NASA Astrophysics Data System (ADS)
Hu, Qing; Hu, Yuwei
This paper analyzes the mathematical model of the linear-elevator maglev guiding system. Because the linear elevator requires strong stability and robustness to run, the integer-order PID controller is extended to fractional order. To improve the steady-state precision, response speed and robustness of the system, and to enhance the accuracy of the parameters in the fractional order PIλDμ controller, fuzzy control is combined with fractional order PIλDμ control, with fuzzy logic used to adjust the parameters online. The simulations reveal that the system has faster response speed, higher tracking precision, and stronger robustness to disturbances.
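A fractional-order PIλDμ control law of the kind described above can be sketched with the Grünwald-Letnikov discretization. The sketch below is a generic illustration, not the authors' implementation: the gains, orders and error history are placeholder values, and the fuzzy online-tuning layer is omitted.

```python
# Minimal fractional-order PI^lambda D^mu controller via the
# Gruenwald-Letnikov (GL) approximation. Kp, Ki, Kd, lam, mu are
# illustrative; a fuzzy supervisor would adjust them online.
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Recursive GL binomial weights w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def frac_deriv(e: np.ndarray, alpha: float, h: float) -> float:
    """GL estimate of D^alpha e at the latest sample (alpha < 0 integrates)."""
    w = gl_weights(alpha, len(e))
    return h ** (-alpha) * np.dot(w, e[::-1])   # sum_j w_j * e[k-j]

def fopid(e_hist, h, Kp=2.0, Ki=1.0, Kd=0.5, lam=0.9, mu=0.8):
    e = np.asarray(e_hist)
    return (Kp * e[-1]
            + Ki * frac_deriv(e, -lam, h)   # fractional integral, order lam
            + Kd * frac_deriv(e, mu, h))    # fractional derivative, order mu

# usage: error history of a step response sampled at h = 1 ms
h = 1e-3
e_hist = np.exp(-np.arange(200) * h / 0.05)   # decaying tracking error
print(fopid(e_hist, h))
```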
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-01-01
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
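The coplanarity/collinearity classification step described above can be illustrated with a PCA eigenvalue test on local neighborhoods. The sketch below is not the authors' algorithm: sklearn's MinCovDet stands in for their robust principal components procedure, and the eigenvalue thresholds are assumed values.

```python
# Illustrative PCA-based coplanarity/collinearity test on a point
# neighborhood; MinCovDet is a robust covariance estimator standing in
# for the paper's robust PCA, and the tolerances are assumptions.
import numpy as np
from sklearn.covariance import MinCovDet

def classify_neighborhood(pts, planar_tol=0.01, linear_tol=0.05):
    """pts: (n, 3) array holding a local neighborhood."""
    cov = MinCovDet().fit(pts).covariance_        # robust 3x3 covariance
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lam[0] >= lam[1] >= lam[2]
    if lam[2] / lam.sum() < planar_tol:           # negligible out-of-plane spread
        return "planar"
    if lam[1] / lam[0] < linear_tol:              # spread along one axis only
        return "linear"
    return "volumetric"

# usage: noisy samples of a plane
rng = np.random.default_rng(0)
plane = rng.uniform(-1, 1, (200, 3)) * [1.0, 1.0, 0.0]
plane += rng.normal(scale=0.005, size=plane.shape)
print(classify_neighborhood(plane))               # -> "planar"
```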
Variational and robust density fitting of four-center two-electron integrals in local metrics
NASA Astrophysics Data System (ADS)
Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł
2008-09-01
Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.
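The key property behind robust density fitting in a non-Coulomb metric can be demonstrated with plain linear algebra: the robust three-term estimate has an error bilinear in the two fitting residuals, which is why fitting in a local metric costs little accuracy. The toy sketch below is not a quantum-chemistry code; random vectors and matrices stand in for orbital-product densities and the Coulomb/local metrics.

```python
# Toy demonstration that the robust (three-term) density-fitting estimate
# has an error bilinear in the fitting residuals, even when the fit is
# done in a metric S different from the "Coulomb" metric J.
import numpy as np

rng = np.random.default_rng(1)
N, naux = 60, 25
A = rng.normal(size=(N, naux))            # auxiliary fitting basis

def spd(n):
    B = rng.normal(size=(n, n))
    return B @ B.T / n + np.eye(n)

J, S = spd(N), spd(N)                     # "Coulomb" and "local" metrics

def fit(u):
    """Least-squares expansion of u in span(A), measured in the S metric."""
    c = np.linalg.solve(A.T @ S @ A, A.T @ S @ u)
    return A @ c

# products that are nearly, but not exactly, representable in the aux basis
u = A @ rng.normal(size=naux) + 0.05 * rng.normal(size=N)
v = A @ rng.normal(size=naux) + 0.05 * rng.normal(size=N)
ut, vt = fit(u), fit(v)

exact = u @ J @ v
naive = ut @ J @ vt                              # error first order in residual
robust = u @ J @ vt + ut @ J @ v - ut @ J @ vt   # error second order

print(abs(exact - naive), abs(exact - robust))   # robust error typically far smaller
assert np.isclose(exact - robust, (u - ut) @ J @ (v - vt))
```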
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-06-01
This study outlines two robust regression approaches, least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application to instrumental analysis of nutraceuticals (here, the fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal and non-ideal linearity conditions. For each condition, the data were treated using three regression fittings: ordinary least squares (OLS), LMS and IRLS. Linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. Under the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the intercept under the non-ideal condition. Under both linearity conditions, LOD and LOQ values after robust regression line fitting were lower than those obtained before data treatment. These results indicate that the linearity ranges for drug determination could be extended to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, obtained by both fluorimetric methods treated with parametric OLS and with robust LMS and IRLS, were compared for both linearity conditions.
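For reference, an IRLS line fit of the kind compared above can be sketched in a few lines. The Huber weight function and tuning constant below are common textbook defaults, not necessarily the authors' exact choices.

```python
# Generic IRLS sketch with Huber weights, illustrating robust fitting of a
# calibration line in the presence of an outlier.
import numpy as np

def irls_line(x, y, c=1.345, n_iter=50):
    """Robustly fit y ~ a + b*x by iteratively re-weighted least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights
        beta = np.linalg.lstsq(np.sqrt(w)[:, None] * X,
                               np.sqrt(w) * y, rcond=None)[0]
    return beta                                     # [intercept, slope]

# usage: calibration line with one gross outlier
x = np.linspace(1, 10, 10)
y = 2.0 + 0.5 * x
y[7] += 5.0                                         # outlier
print(irls_line(x, y))   # close to [2.0, 0.5] despite the outlier
```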
van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L
2011-10-13
Accurate in vivo measurement methods for wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between the measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant.
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast-enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply variable projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, convergence robustness, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM, with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in all cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent to the LM-based method in accuracy and robustness to noise, while converging more reliably (100% of cases) and computing approximately 3× (TM) and 2× (ETM) faster.
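The core of the VP trick for the standard Tofts model is that, for fixed kep, the tissue curve is linear in Ktrans, so the two-parameter fit collapses to a bounded one-dimensional search with a closed-form linear solve inside. A minimal sketch with a synthetic arterial input function follows; all values are illustrative, not the paper's data.

```python
# Variable projection (VP) for the standard two-parameter Tofts model:
# Ct(t) = Ktrans * conv(Cp, exp(-kep * t)). For fixed kep the model is
# linear in Ktrans, so VP searches only over kep.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0, 5, 120)                    # minutes
dt = t[1] - t[0]
cp = (t ** 2) * np.exp(-2.0 * t)              # toy arterial input function

def tofts_basis(kep):
    """phi(t; kep) = integral of Cp(tau) exp(-kep (t - tau)) dtau."""
    return np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

def vp_fit(ct):
    def projected_rss(kep):
        phi = tofts_basis(kep)
        ktrans = phi @ ct / (phi @ phi)       # closed-form linear solve
        return np.sum((ct - ktrans * phi) ** 2)
    res = minimize_scalar(projected_rss, bounds=(1e-3, 10.0), method="bounded")
    kep = res.x
    phi = tofts_basis(kep)
    return phi @ ct / (phi @ phi), kep        # (Ktrans, kep)

# usage: recover parameters from noisy synthetic data
rng = np.random.default_rng(2)
ct = 0.3 * tofts_basis(1.5) + rng.normal(scale=1e-4, size=len(t))
print(vp_fit(ct))                             # approx (0.3, 1.5)
```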
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain-computer interfaces (BCIs) present a challenge for the development of robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension, where the RBF kernel yields better-discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III dataset IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that combining KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
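The CSP building block at the heart of FBCSP reduces to a generalized eigendecomposition of the two class covariance matrices. The sketch below shows only that core step on synthetic data; the band-pass filter bank, the SLVQ weighting and the KLDA/SVM-GRBF stages described above are omitted.

```python
# Minimal common spatial pattern (CSP) sketch: generalized eigendecomposition
# of the two class covariances, then standard log-variance features.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: list of (channels, samples) arrays for each class."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lam (Ca + Cb) w; eigh sorts eigenvalues ascending.
    lam, W = eigh(Ca, Ca + Cb)
    idx = np.r_[np.argsort(lam)[:n_pairs], np.argsort(lam)[-n_pairs:]]
    return W[:, idx].T                            # (2*n_pairs, channels)

def csp_features(x, W):
    z = W @ x                                     # spatially filtered trial
    var = np.var(z, axis=1)
    return np.log(var / var.sum())                # log-variance features

# usage with synthetic 8-channel trials (class "a" has extra channel-0 power)
rng = np.random.default_rng(3)
a = [rng.normal(size=(8, 256)) * np.r_[2.0, np.ones(7)][:, None] for _ in range(20)]
b = [rng.normal(size=(8, 256)) for _ in range(20)]
W = csp_filters(a, b)
print(csp_features(a[0], W).shape)                # (6,)
```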
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely used technique in pattern classification, via an equation that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends heavily on the assumptions of normality and homoscedasticity. Several robust estimators in LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
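A robust LDA of the kind studied above can be sketched by plugging a robust location/scatter estimate into the usual linear discriminant score. Here sklearn's Minimum Covariance Determinant plays the role of the MCD estimator; the data, priors and pooling scheme are illustrative assumptions rather than the paper's setup.

```python
# Robust LDA sketch: MCD-based class means and pooled scatter inserted into
# the standard linear discriminant score, evaluated by the hit ratio.
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_fit(X, y):
    classes = np.unique(y)
    means, covs, priors = {}, [], {}
    for c in classes:
        mcd = MinCovDet().fit(X[y == c])
        means[c] = mcd.location_
        covs.append(mcd.covariance_)
        priors[c] = np.mean(y == c)
    P = np.linalg.inv(np.mean(covs, axis=0))   # assumes a common scatter
    return classes, means, priors, P

def robust_lda_predict(X, model):
    classes, means, priors, P = model
    scores = np.array([X @ P @ means[c] - 0.5 * means[c] @ P @ means[c]
                       + np.log(priors[c]) for c in classes]).T
    return classes[np.argmax(scores, axis=1)]

# usage: two-group "distress"/"non-distress" style data with a gross outlier
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.r_[np.zeros(50), np.ones(50)]
X[0] += 15                                     # outlier in group 0
model = robust_lda_fit(X, y)
print(np.mean(robust_lda_predict(X, model) == y))   # hit ratio
```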
Gromski, Piotr S; Correa, Elon; Vaughan, Andrew A; Wedge, David C; Turner, Michael L; Goodacre, Royston
2014-11-01
Accurate detection of certain chemical vapours is important, as these may be diagnostic for the presence of weapons, drugs of misuse or disease. In order to achieve this, chemical sensors could be deployed remotely. However, the readout from such sensors is a multivariate pattern, and this needs to be interpreted robustly using powerful supervised learning methods. Therefore, in this study, we compared the classification accuracy of four pattern recognition algorithms: linear discriminant analysis (LDA), partial least squares-discriminant analysis (PLS-DA), random forests (RF) and support vector machines (SVM), for which four different kernels were employed. For this purpose, we have used electronic nose (e-nose) sensor data (Wedge et al., Sensors Actuators B Chem 143:365-372, 2009). In order to allow direct comparison between our four different algorithms, we employed two model validation procedures based on either 10-fold cross-validation or bootstrapping. The results show that LDA (91.56% accuracy) and SVM with a polynomial kernel (91.66% accuracy) were very effective at analysing these e-nose data. These two models gave superior prediction accuracy, sensitivity and specificity in comparison to the other techniques employed. With respect to the e-nose sensor data studied here, our findings recommend that SVM with a polynomial kernel should be favoured as a classification method over the other statistical models that we assessed. SVMs with non-linear kernels have the advantage that they can model non-linear as well as linear mappings from the analytical data space to multi-group classifications, and would thus be a suitable algorithm for the analysis of most e-nose sensor data.
A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.
Yu, Fei; Lv, Chongyang; Dong, Qianhui
2016-03-18
Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, because of the complementary navigation information obtained from the two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced; the SINS/CNS system is therefore widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which makes its output discontinuous, and the uncertainty caused by the lost measurements reduces the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. Taking the measurement uncertainty into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated in numerical simulations and actual experiments. The simulation and experiment results show that the attitude errors are effectively reduced by the proposed robust filter when measurements are discontinuously missing. Compared with the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153
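For orientation, one common discrete-time H∞ filter recursion is reproduced below for the model x_{k+1} = F_k x_k + w_k, y_k = H_k x_k + v_k, with the combination z_k = L_k x_k to be estimated and user-chosen performance bound θ. This is the standard game-theoretic form, shown as background only; it is not necessarily the Krein-space derivation used by the authors.

```latex
% A common discrete-time H-infinity filter recursion (game-theoretic form).
\begin{aligned}
\bar{S}_k &= L_k^{\top} S_k L_k,\\
K_k &= P_k \bigl[\, I - \theta \bar{S}_k P_k
        + H_k^{\top} R_k^{-1} H_k P_k \bigr]^{-1} H_k^{\top} R_k^{-1},\\
\hat{x}_{k+1} &= F_k \hat{x}_k + F_k K_k \,\bigl( y_k - H_k \hat{x}_k \bigr),\\
P_{k+1} &= F_k P_k \bigl[\, I - \theta \bar{S}_k P_k
        + H_k^{\top} R_k^{-1} H_k P_k \bigr]^{-1} F_k^{\top} + Q_k.
\end{aligned}
```

The recursion requires the existence condition P_k^{-1} - θ S̄_k + H_k^⊤ R_k^{-1} H_k ≻ 0 at every step, and letting θ → 0 recovers the conventional Kalman filter, which is consistent with the KF comparison reported above.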
Efficient and Robust Optimization for Building Energy Simulation
Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda
2016-01-01
Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. HVACSIM+ presently employs Powell’s hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell’s method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds considerable computational benefits result from replacing the Powell’s hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell’s hybrid method presently used in HVACSIM+. PMID:27325907
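Both solver families compared above are available through SciPy's MINPACK wrappers, which makes the comparison easy to reproduce on a toy system; the residual function below is a stand-in, not an HVACSIM+ component model.

```python
# Powell's hybrid method (root, method='hybr') versus a Levenberg-Marquardt
# variant (least_squares, method='lm') on a mildly stiff nonlinear system.
import numpy as np
from scipy.optimize import root, least_squares

def residuals(x):
    """A mildly stiff nonlinear system F(x) = 0."""
    return np.array([
        1e3 * (x[1] - x[0] ** 2),
        1.0 - x[0],
        np.tanh(x[2]) + 0.5 * x[0] - 1.2,
    ])

x0 = np.zeros(3)
sol_powell = root(residuals, x0, method="hybr")      # Powell's hybrid
sol_lm = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt

print(sol_powell.success, sol_powell.x)
print(sol_lm.x, np.linalg.norm(sol_lm.fun))
```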
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.
2016-01-15
Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl₂ aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on tissue type or proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties in tissue composition and independent of the choice of reference values. The reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require iteratively solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.
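The final step the abstract relies on, turning a DECT-derived electron density and mean excitation energy into a proton stopping power, can be illustrated with the uncorrected Bethe formula. The water-like values below are illustrative, and shell/density corrections are omitted, so this is a sketch rather than the authors' full pipeline.

```python
# Uncorrected Bethe stopping power for a proton, given an electron density
# n_e [e-/cm^3] and mean excitation energy I [eV] such as DECT provides.
import numpy as np

MEC2 = 0.510998950e6        # electron rest energy [eV]
MPC2 = 938.27208816e6       # proton rest energy [eV]
RE = 2.8179403262e-13       # classical electron radius [cm]

def bethe_stopping_power(T_eV, n_e, I_eV):
    """-dE/dx in MeV/cm for a proton of kinetic energy T_eV."""
    gamma = 1.0 + T_eV / MPC2
    beta2 = 1.0 - 1.0 / gamma ** 2
    L = np.log(2.0 * MEC2 * beta2 * gamma ** 2 / I_eV) - beta2
    prefac = 4.0 * np.pi * RE ** 2 * MEC2 * n_e    # eV/cm
    return prefac / beta2 * L * 1e-6               # MeV/cm

# usage: 175 MeV proton in water (n_e ~ 3.343e23 e-/cm^3, I ~ 75 eV)
print(bethe_stopping_power(175e6, 3.343e23, 75.0))  # roughly 5 MeV/cm
```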
Analysis of Slope Limiters on Irregular Grids
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2005-01-01
This paper examines the behavior of flux and slope limiters on non-uniform grids in multiple dimensions. Many slope limiters in standard use do not preserve linear solutions on irregular grids, impacting both accuracy and convergence. We rewrite some well-known limiters to highlight their underlying symmetry, and use this form to examine the properties of both traditional and novel limiter formulations on non-uniform meshes. A consistent method of handling stretched meshes is developed, which is linearity-preserving for arbitrary mesh stretchings and reduces to common limiters on uniform meshes. In multiple dimensions we analyze the monotonicity region of the gradient vector and show that the multidimensional limiting problem may be cast as the solution of a linear programming problem. For some special cases we present a new directional limiting formulation that preserves linear solutions in multiple dimensions on irregular grids. Computational results using model problems and complex three-dimensional examples are presented, demonstrating accuracy, monotonicity and robustness.
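The linearity-preservation property at issue can be checked in one dimension: on an irregular grid, a least-squares cell slope combined with a Barth-Jespersen-style limiter leaves an exactly linear field unlimited. The toy verification below is an assumption-laden simplification, not the paper's multidimensional formulation.

```python
# 1-D check that a least-squares slope plus a Barth-Jespersen-style limiter
# reconstructs an exactly linear field (phi = 1) on an irregular grid.
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 12))          # irregular cell centers
u = 3.0 + 0.7 * x                            # exactly linear data

def limited_slope(i):
    # least-squares slope from the two neighbors
    dx = x[i - 1 : i + 2] - x[i]
    du = u[i - 1 : i + 2] - u[i]
    s = np.dot(dx, du) / np.dot(dx, dx)
    # Barth-Jespersen limiter: keep half-face excursions within neighbor bounds
    umax, umin = du.max(), du.min()
    phi = 1.0
    for d in (dx[0], dx[2]):
        proj = s * d / 2.0                   # value change to the half-face
        if proj > umax:
            phi = min(phi, umax / proj)
        elif proj < umin:
            phi = min(phi, umin / proj)
    return phi, phi * s

for i in range(1, len(x) - 1):
    phi, slope = limited_slope(i)
    assert np.isclose(phi, 1.0) and np.isclose(slope, 0.7)
print("linear field preserved on irregular grid")
```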
Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines, for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement.
This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
Nasso, Sara; Goetze, Sandra; Martens, Lennart
2015-09-04
Selected reaction monitoring (SRM) MS is a highly selective and sensitive technique for quantifying protein abundances in complex biological samples. To enhance the pace of large SRM studies, a validated, robust method that fully automates absolute quantification and substitutes for interactive evaluation would be valuable. To address this demand, we present Ariadne, a Matlab software package. To quantify monitored targets, Ariadne exploits metadata imported from the transition lists, and targets can be filtered according to mProphet output. Signal processing and statistical learning approaches are combined to compute peptide quantifications. To robustly estimate absolute abundances, the external calibration curve method is applied, ensuring linearity over the measured dynamic range. Ariadne was benchmarked against mProphet and Skyline by comparing its quantification performance on three different dilution series, featuring either noisy/smooth traces without background or smooth traces with complex background. Results, evaluated as efficiency, linearity, accuracy, and precision of quantification, showed that Ariadne's performance is independent of data smoothness and the presence of complex background, that Ariadne outperforms mProphet on the noisier data set, and that it improves Skyline's accuracy and precision 2-fold for the lowest-abundance dilution with complex background. Remarkably, Ariadne could statistically distinguish all the different abundances from each other, discriminating dilutions as low as 0.1 and 0.2 fmol. These results suggest that Ariadne offers reliable and automated analysis of large-scale SRM differential expression studies.
Castegnaro, Silvia; Dragone, Patrizia; Chieregato, Katia; Alghisi, Alberta; Rodeghiero, Francesco; Astori, Giuseppe
2016-04-01
Transfusion of blood components is potentially associated with the risk of cell-mediated adverse events, and current guidelines require a reduction of residual white blood cells (rWBC) below 1 × 10⁶ WBC/unit. The reference method for enumerating rare events is flow cytometry (FCM). The ADAM-rWBC microscopic cell counter has been proposed as an alternative: it measures leukocytes after their staining with propidium iodide. We tested the ADAM-rWBC for its ability to enumerate rWBC in red blood cells and concentrates. We validated the flow cytometry (FCM) for linearity, precision, accuracy and robustness, and then compared the ADAM-rWBC results with the FCM. Our data confirm the linearity, accuracy, precision and robustness of the FCM. The ADAM-rWBC revealed adequate precision and accuracy. Even though the Bland-Altman analysis of the paired data indicated that the two systems are comparable, it should be noted that the rWBC values obtained by the ADAM-rWBC were significantly higher compared with FCM. In conclusion, the ADAM-rWBC cell counter could represent an alternative where FCM technology expertise is not available, even if the risk exists that borderline products could be misclassified.
Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam
2017-01-01
Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as a traditional medicine for the treatment of numerous diseases. The present study aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient, using 0.1% formic acid:acetonitrile (87:13) as the mobile phase at a flow rate of 1.5 ml/min. The separation was performed at 26°C on a Kinetex XB-C18 column as the stationary phase, with detection at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation (ICH) guidelines. For linearity, a correlation coefficient greater than 0.999 indicated good fitting of the curve. The intra- and inter-day precision showed <1% relative standard deviation of peak area, indicating high reliability and reproducibility of the method. The recovery values at three spiked levels (50%, 100%, and 150%) were found to be 100.47, 100.89, and 100.99, respectively, and standard deviation values below 1% show the high accuracy of the method. In robustness testing, the results remained unaffected by small variations in the analytical parameters. Liquid chromatography–mass spectrometry analysis confirmed the presence of mangiferin with an m/z value of 421. The assay developed by the HPLC method is simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification and robustness by the ICH guidelines. This study proved that the developed HPLC assay is simple, rapid and reliable for the quantification of mangiferin from M. indica. Abbreviations used: M. indica: Mangifera indica; RP-HPLC: reversed-phase high-performance liquid chromatography; m/z: mass-to-charge ratio; ICH: International Conference on Harmonisation; % RSD: percentage relative standard deviation; ppm: parts per million; LOD: limit of detection; LOQ: limit of quantification. PMID:28539748
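The ICH figures of merit named above follow from the calibration line in a standard way, e.g. LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation of the line and S its slope. A worked example with made-up numbers (not the paper's data):

```python
# Calibration linearity, LOD and LOQ computed ICH-style from a fitted line.
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])         # ppm
area = np.array([102.0, 198.0, 405.0, 795.0, 1602.0])  # peak areas

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
resid_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
r = np.corrcoef(conc, area)[0, 1]

lod = 3.3 * resid_sd / slope
loq = 10.0 * resid_sd / slope
print(f"r={r:.4f}  LOD={lod:.2f} ppm  LOQ={loq:.2f} ppm")
```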
Efficient Computation of Info-Gap Robustness for Finite Element Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge from the standpoint of the required computational resources, because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
Linear programming computational experience with onyx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.
Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon
2015-05-01
The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies, through a correlation analysis, in order to establish the ratio as the factor contributing to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All acquired data were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance-ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes in impedance ratios at various frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file advanced deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was located within a linear interval. Therefore, using the ratio between electrical impedance measurements at different frequencies is a robust method for detection of the APC.
Robust coordinated control of a dual-arm space robot
NASA Astrophysics Data System (ADS)
Shi, Lingling; Kayastha, Sharmila; Katupitiya, Jay
2017-09-01
Dual-arm space robots are more capable of implementing complex space tasks than single-arm space robots. However, the dynamic coupling between the arms and the base has a serious impact on the spacecraft attitude and the hand motion of each arm. Instead of considering one arm as the mission arm and the other as the balance arm, in this work both arms of the space robot perform as mission arms aimed at accomplishing the secure capture of a floating target. The paper investigates coordinated control of the base's attitude and the arms' motion in the task space in the presence of system uncertainties. Two types of controllers, a sliding mode controller (SMC) and a nonlinear model predictive controller (MPC), are verified and compared with a conventional computed-torque controller (CTC) through numerical simulations in terms of control accuracy and system robustness. Both controllers eliminate the need to linearly parameterize the dynamic equations. The MPC is shown to achieve higher accuracy than the CTC and SMC in the absence of system uncertainties, under the condition that they consume comparable energy. When system uncertainties are included, the SMC and CTC exhibit better robustness than the MPC. Specifically, in a case where the system inertia increases, the SMC delivers higher accuracy than the CTC while costing the least energy.
New machine-learning algorithms for prediction of Parkinson's disease
NASA Astrophysics Data System (ADS)
Mandal, Indrajit; Sairam, N.
2014-03-01
This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), to prevent delay and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods applied to PD prediction include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimized by a Tabu search algorithm as the classifier and Haar wavelets as the projection filter, is used for relevant feature selection and ranking. The highest accuracy, obtained by linear logistic regression and sparse multinomial logistic regression, is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All experiments are conducted at 95% and 99% confidence levels, and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system, and experimentally shows best results with supportive statistical inference.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory; however, source coherence drastically reduces the accuracy of estimating the source height. Further application of the Levenberg-Marquardt method, with the MUSIC results as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds
Poreba, Martyna; Goulette, François
2015-01-01
With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589
Extending the accuracy of the SNAP interatomic potential form
NASA Astrophysics Data System (ADS)
Wood, Mitchell A.; Thompson, Aidan P.
2018-06-01
The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
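The structural difference between the linear and quadratic SNAP forms can be illustrated as a regression problem: the quadratic form simply augments the per-atom bispectrum features with their unique pairwise products. The sketch below uses random synthetic features as stand-ins for real bispectrum components; only the regression structure is illustrated, not an actual potential fit.

```python
# Linear vs quadratic SNAP-style regression: per-atom energy as a linear
# function of features B, versus B augmented with its pairwise products.
import numpy as np

rng = np.random.default_rng(6)
n_atoms, n_B = 500, 10
B = rng.normal(size=(n_atoms, n_B))          # stand-in bispectrum components

# synthetic target energy with genuine quadratic content
beta_true = rng.normal(size=n_B)
alpha_true = 0.1 * rng.normal(size=(n_B, n_B))
E = B @ beta_true + np.einsum("ni,ij,nj->n", B, alpha_true, B)

def design(X, quadratic):
    cols = [np.ones((len(X), 1)), X]
    if quadratic:
        iu = np.triu_indices(X.shape[1])     # unique quadratic monomials
        cols.append((X[:, :, None] * X[:, None, :])[:, iu[0], iu[1]])
    return np.hstack(cols)

for quad in (False, True):
    A = design(B, quad)
    coef, *_ = np.linalg.lstsq(A, E, rcond=None)
    rms = np.sqrt(np.mean((A @ coef - E) ** 2))
    print("quadratic" if quad else "linear   ", f"RMS error = {rms:.3e}")
```

As the abstract notes for the real potential, the quadratic design matrix is much wider (here 66 columns instead of 11), which is why substantially more training data is needed to avoid overfitting.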
NASA Technical Reports Server (NTRS)
West, M. E.
1992-01-01
A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters are compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown, that for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with less computational requirements than the two real-time nonlinear estimation filters.
Fully-Implicit Orthogonal Reconstructed Discontinuous Galerkin for Fluid Dynamics with Phase Change
Nourgaliev, R.; Luo, H.; Weston, B.; ...
2015-11-11
A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which are attributed to many multiphysics problems. We demonstrate the method’s capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing (AM). We focus on the method’s accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver.
Measures of model performance based on the log accuracy ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
2018-01-03
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio, and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely-used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
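The two metrics introduced in the paper are direct functions of the log accuracy ratio ln Q = ln(prediction/observation) and can be transcribed as follows (for strictly positive quantities such as fluxes):

```python
# Median symmetric accuracy (MSA) and symmetric signed percentage bias (SSPB),
# both defined from the log accuracy ratio ln(Q) = ln(pred / obs).
import numpy as np

def log_accuracy_ratio(pred, obs):
    return np.log(np.asarray(pred) / np.asarray(obs))

def median_symmetric_accuracy(pred, obs):
    """MSA = 100 * (exp(median(|ln Q|)) - 1), in percent."""
    return 100.0 * (np.exp(np.median(np.abs(log_accuracy_ratio(pred, obs)))) - 1.0)

def symmetric_signed_percentage_bias(pred, obs):
    """SSPB = 100 * sgn(M) * (exp(|M|) - 1), with M = median(ln Q)."""
    m = np.median(log_accuracy_ratio(pred, obs))
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

# usage: a model that is typically 2x high is reported symmetrically
obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = 2.0 * obs
print(median_symmetric_accuracy(pred, obs))         # 100.0 (a factor of 2)
print(symmetric_signed_percentage_bias(pred, obs))  # +100.0 (overprediction)
```

Unlike MAPE, these report a factor-of-2 overprediction and a factor-of-2 underprediction with the same magnitude, which is the symmetry property the paper emphasizes.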
Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.
Flassig, R J; Sundmacher, K
2012-12-01
Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
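For readers unfamiliar with the sigma-point approach mentioned above, the following is a minimal sketch of unscented propagation of a parameter PDF through a nonlinear model response. The weight parameterization is the standard Van der Merwe one, and the toy one-point model response is an assumption for illustration; the paper's own MATLAB/AMPL implementation is linked above.

```python
import numpy as np

def sigma_points(mean, cov, alpha=0.5, beta=2.0, kappa=0.0):
    # 2n+1 deterministically placed points capturing mean and covariance
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + L.T, mean - L.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def propagate(f, mean, cov):
    # push each sigma point through the nonlinear model, re-estimate moments
    pts, wm, wc = sigma_points(mean, cov)
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    resid = ys - y_mean
    return y_mean, (wc * resid.T) @ resid

# Toy model response: y(t=1) = A*exp(-k) with uncertain parameters (A, k)
f = lambda p: np.array([p[0] * np.exp(-p[1])])
m, C = propagate(f, np.array([2.0, 1.0]), np.diag([0.1**2, 0.3**2]))
print(m, C)
```

The cost grows linearly with the number of parameters (2n+1 model evaluations), which is the scaling property the abstract highlights.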
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behaviors of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, remains free of spurious oscillations, and captures the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed in the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo
2015-11-01
This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behaviors of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, remains free of spurious oscillations, and captures the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed in the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
Standardized UXO Technology Demonstration Site, Blind Grid Scoring Record No. 919
2008-07-01
(provided by demonstrator) a. The core component of the electromagnetic (EM) AMOS metal detector is a linear multichannel sensor array consisting of a 2-meter-wide transmitter coil and 16 receiver coils, mounted on a robust, all-terrain trailer (fig. 1). b. The AMOS detector unit consists of the ... Attainable accuracy of depth (z): ±0.3 m. h. Detection performance for ferrous and nonferrous metals: will detect ammunition components 20-mm caliber ...
Avsec, Žiga; Cheng, Jun; Gagneur, Julien
2018-01-01
Motivation: Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results: Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation: Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact: avsec@in.tum.de or gagneur@in.tum.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29155928
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
NASA Astrophysics Data System (ADS)
Mokhtar, Nurkhairany Amyra; Zubairi, Yong Zulina; Hussin, Abdul Ghapor
2017-05-01
Outlier detection has been used extensively in data analysis to detect anomalous observations in data, and it has important applications in fraud detection and robust analysis. In this paper, we propose a method for detecting multiple outliers for circular variables in the linear functional relationship model. Using the residual values of the Caires and Wyatt model, we apply a hierarchical clustering procedure. Using a tree diagram, we illustrate the graphical approach to outlier detection. A simulation study is done to verify the accuracy of the proposed method, and an illustration with a real data set is given to show its practical applicability.
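As a rough sketch of the residual-clustering idea (not the authors' implementation; the residual values and the two-cluster cut below are toy assumptions), single-linkage hierarchical clustering can separate a compact inlier group from isolated large residuals:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

residuals = np.array([0.02, -0.01, 0.03, 0.00, -0.02, 0.95, 1.10])  # toy values
Z = linkage(residuals.reshape(-1, 1), method="single")  # tree-diagram source
labels = fcluster(Z, t=2, criterion="maxclust")         # cut into two groups
inlier_cluster = np.bincount(labels).argmax()           # largest group = clean data
print("suspected outliers:", np.where(labels != inlier_cluster)[0])
```

The linkage matrix Z is exactly what a dendrogram (tree diagram) visualizes, matching the graphical approach described above.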
Matrix preconditioning: a robust operation for optical linear algebra processors.
Ghosh, A; Paparao, P
1987-07-15
Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
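A toy numerical illustration of why preconditioning helps gradient methods (plain NumPy standing in for the optical processor; the matrix and the simple symmetric Jacobi scaling are assumptions, not the paper's parallel algorithm):

```python
import numpy as np

A = np.array([[400.0, 2.0],
              [2.0,   0.04]])              # badly scaled SPD matrix
D = np.diag(1.0 / np.sqrt(np.diag(A)))     # symmetric diagonal (Jacobi) scaling
print(np.linalg.cond(A))                   # ~1.3e4: gradient methods crawl
print(np.linalg.cond(D @ A @ D))           # 3.0: rapid convergence expected
```

Since the condition number bounds the convergence rate of gradient algorithms, even a crude preconditioner can cut iteration counts dramatically, and small perturbations to D (such as an analog optical system would introduce) degrade it only mildly, consistent with the robustness claim above.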
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
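A rough sketch of the background-estimation step described above, assuming a Gaussian low-pass filter as the linear filter and a fixed snapshot rather than the paper's dynamically updated estimate; all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 128)
image = 2.0 * np.outer(x, x) + rng.normal(0.0, 0.05, (128, 128))  # inhomogeneous bg
image[40:80, 40:80] += 1.0                                        # foreground object

background = gaussian_filter(image, sigma=25.0)  # linear-filtered estimate
difference = image - background                  # global fitting acts on this
mask = difference > 0.5 * difference.max()       # crude stand-in for the contour
print(mask.sum(), "foreground pixels")
```

Subtracting the smooth estimate flattens the inhomogeneous background, which is why the global term can then be fit on the difference image without losing robustness to the initial contour location.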
Robust global identifiability theory using potentials--Application to compartmental models.
Wongvanich, N; Hann, C E; Sirisena, H R
2015-04-01
This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3% which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens, to within 9% of the true curve.
Stabilization Approaches for Linear and Nonlinear Reduced Order Models
NASA Astrophysics Data System (ADS)
Rezaian, Elnaz; Wei, Mingjun
2017-11-01
It has been a major concern to establish reduced order models (ROMs) as reliable representatives of the dynamics inherent in high-fidelity simulations while achieving fast computation. In practice, this comes down to the stability and accuracy of the ROMs. Given the inviscid nature of the Euler equations, it becomes more challenging to achieve stability, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms. At the same time, the symmetry inner product is introduced in the generation of ROMs for its known robust behavior for compressible flows. Results have shown a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response for minimum discrepancy between the high-fidelity simulation and the ROM outputs. Promising results are obtained in its application on the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sawant, Amit
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-01-01
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) method are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347
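A minimal sketch of the SR idea on synthetic data: once correspondence is fixed (assumed done by ICP), each flattened target cloud is approximated as a sparse linear combination of flattened training clouds. Lasso is used here as a generic sparse-solver stand-in; it is not necessarily the authors' optimizer.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_train, n_coords = 20, 1500                  # 500 corresponded 3-D points
train = rng.normal(size=(n_train, n_coords))  # flattened training clouds
w_true = np.zeros(n_train)
w_true[[2, 7]] = [0.6, 0.4]                   # target really uses two clouds
target = w_true @ train + rng.normal(0.0, 0.01, n_coords)

sr = Lasso(alpha=1e-3, fit_intercept=False)
sr.fit(train.T, target)                       # columns = training clouds
print("active training clouds:", np.flatnonzero(sr.coef_))
reconstruction = train.T @ sr.coef_
```

The MSR variant would additionally model large, sparse ICP errors (the Laplacian prior on the residual), which amounts to an l1 penalty on an extra error term rather than on the combination weights alone.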
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Eric M.
2004-05-20
The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.
Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller
2015-01-01
A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters were previously studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness and accuracy of the method were evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg⁻¹ (quadratic regression) and 250.0 and 2000.0 μg kg⁻¹ (linear regression) of mercury. The residues for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. The limit of quantification (LOQ) and the limit of detection of the method (LDM), based on the signal standard deviation (SD) for a low-mercury sample, were 3.0 and 1.0 μg kg⁻¹, respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the current method was evaluated and pointed to sample mass as a significant factor. Accuracy (assessed as analyte recovery) was calculated on the basis of the repeatability and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.
Integrated vision-based GNC for autonomous rendezvous and capture around Mars
NASA Astrophysics Data System (ADS)
Strippoli, L.; Novelli, G.; Gil Fernandez, J.; Colmenarejo, P.; Le Peuvedic, C.; Lanza, P.; Ankersen, F.
2015-06-01
Integrated GNC (iGNC) is an activity aimed at designing, developing and validating the GNC for autonomously performing the rendezvous and capture phase of the Mars sample return mission as defined during the Mars Sample Return Orbiter (MSRO) ESA study. The validation cycle includes testing in an end-to-end simulator, in a real-time avionics-representative test bench and, finally, in a dynamic hardware-in-the-loop test bench for assessing the feasibility, performance and figures of merit of the baseline approach defined during the MSRO study, for both nominal and contingency scenarios. The on-board software (OBSW) is tailored to work with the sensors, actuators and orbits baselined in MSRO. The whole rendezvous is based on optical navigation, aided by RF Doppler during the search and first orbit determination of the orbiting sample. The simulated rendezvous phase also includes the non-linear orbit synchronization, based on a dedicated non-linear guidance algorithm robust to Mars ascent vehicle (MAV) injection accuracy or MAV failures resulting in elliptic target orbits. The search phase is very demanding for the image processing (IP) due to the very high visual magnitude of the target with respect to the stellar background, and the attitude GNC requires very high pointing stability to fulfil IP constraints. A trade-off of innovative, autonomous navigation filters indicates the unscented Kalman filter (UKF) as the approach that provides the best results in terms of robustness, response to non-linearities and performance, compatible with the computational load. At short range, an optimized IP based on a convex hull algorithm has been developed in order to guarantee line-of-sight (LoS) and range measurements from hundreds of metres to capture.
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness.
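A minimal sketch of the weighted least-squares fix for heteroscedastic calibration data. Note that np.polyfit squares the supplied weights, so passing w = 1/x yields effective 1/x² weighting, a common choice; the paper's exact weighting factor is not restated here and the numbers are toy values.

```python
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])  # spiked levels
resp = np.array([1.1, 5.2, 9.7, 51.5, 97.0, 509.0])    # instrument response
wls = np.polyfit(conc, resp, 1, w=1.0 / conc)           # weighted fit
ols = np.polyfit(conc, resp, 1)                         # ordinary fit
print("WLS slope/intercept:", wls)   # low end of the curve fits tightly
print("OLS slope/intercept:", ols)   # dominated by the high concentrations
```

Down-weighting the large concentrations is what restores accuracy at the lower end of the calibration curve, as the abstract notes.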
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta
2017-02-01
The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic approach, Design of Experiments, was applied to optimize ESI source parameters and to evaluate method robustness; a rapid, stable and cost-effective assay was thereby developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range of 5-2500 pg/ml (R² > 0.98). The accuracies and intra- and inter-day precisions were within 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. Application of the Design of Experiments approach allowed for fast and efficient analytical method development and validation, as well as reduced usage of the chemicals necessary for conventional method optimization. The proposed technique was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
Tezaur, I. K.; Perego, M.; Salinger, A. G.; ...
2015-04-27
This paper describes a new parallel, scalable and robust finite element based solver for the first-order Stokes momentum balance equations for ice flow. The solver, known as Albany/FELIX, is constructed using the component-based approach to building application codes, in which mature, modular libraries developed as a part of the Trilinos project are combined using abstract interfaces and template-based generic programming, resulting in a final code with access to dozens of algorithmic and advanced analysis capabilities. Following an overview of the relevant partial differential equations and boundary conditions, the numerical methods chosen to discretize the ice flow equations are described, along with their implementation. The results of several verification studies of the model accuracy are presented using (1) new test cases for simplified two-dimensional (2-D) versions of the governing equations derived using the method of manufactured solutions, and (2) canonical ice sheet modeling benchmarks. Model accuracy and convergence with respect to mesh resolution are then studied on problems involving a realistic Greenland ice sheet geometry discretized using hexahedral and tetrahedral meshes. Also explored as a part of this study is the effect of vertical mesh resolution on the solution accuracy and solver performance. The robustness and scalability of our solver on these problems is demonstrated. Lastly, we show that good scalability can be achieved by preconditioning the iterative linear solver using a new algebraic multilevel preconditioner, constructed based on the idea of semi-coarsening.
Telephony-based voice pathology assessment using automated speech analysis.
Moran, Rosalyn J; Reilly, Richard B; de Chazal, Philip; Lacy, Peter D
2006-03-01
A system for remotely detecting vocal fold pathologies using telephone-quality speech is presented. The system uses a linear classifier, processing measurements of pitch perturbation, amplitude perturbation and harmonic-to-noise ratio derived from digitized speech recordings. Voice recordings from the Disordered Voice Database Model 4337 system were used to develop and validate the system. Results show that while a sustained phonation, recorded in a controlled environment, can be classified as normal or pathologic with an accuracy of 89.1%, telephone-quality speech can be classified as normal or pathologic with an accuracy of 74.2%, using the same scheme. Amplitude perturbation features prove most robust for telephone-quality speech. The pathologic recordings were then subcategorized into four groups, comprising normal, neuromuscular pathologic, physical pathologic and mixed (neuromuscular with physical) pathologic. A separate classifier was developed for classifying the normal group from each pathologic subcategory. Results show that neuromuscular disorders could be detected remotely with an accuracy of 87%, physical abnormalities with an accuracy of 78% and mixed pathology voice with an accuracy of 61%. This study highlights the real possibility for remote detection and diagnosis of voice pathology.
Wang, Guangji; Wang, Qian; Rao, Tai; Shen, Boyu; Kang, Dian; Shao, Yuhao; Xiao, Jingcheng; Chen, Huimin; Liang, Yan
2016-06-15
Pidotimod, (R)-3-[(S)-(5-oxo-2-pyrrolidinyl) carbonyl]-thiazolidine-4-carboxylic acid, is frequently used to treat children with recurrent respiratory infections, yet its preclinical pharmacokinetics have rarely been reported to date. Herein, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated to determine pidotimod in rat plasma, tissue homogenate and Caco-2 cells. In this process, phenacetin was chosen as the internal standard due to the similarity of its chromatographic and mass spectrometric characteristics to those of pidotimod. The plasma calibration curves were established within the concentration range of 0.01-10.00 μg/mL, and similar linear curves were built using tissue homogenate and Caco-2 cells. The calibration curves for all biological samples showed good linearity (r > 0.99) over the concentration ranges tested. The intra- and inter-day precision (RSD, %) values were below 15%, and accuracy (RE, %) ranged from -15% to 15% at all quality control levels. For plasma, tissue homogenate and Caco-2 cells, no obvious matrix effect was found, and the average recoveries were all above 75%. Thus, the method demonstrated excellent accuracy, precision and robustness for high-throughput applications, and was then successfully applied to studies of the absorption in rat plasma, distribution in rat tissues and intracellular uptake characteristics in Caco-2 cells of pidotimod.
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
Robust estimation for partially linear models with large-dimensional covariates.
Zhu, LiPing; Li, RunZe; Cui, HengJian
2013-10-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
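A hedged sketch of the robust local linear regression step for the nonparametric component: a kernel-weighted Huber fit around each evaluation point. The Gaussian kernel, bandwidth and HuberRegressor stand-in are assumptions, not the authors' exact estimator.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def robust_local_linear(x, y, x0, bandwidth=0.3):
    # Gaussian kernel weights centered at x0; Huber loss resists outliers
    k = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    model = HuberRegressor().fit(x.reshape(-1, 1), y, sample_weight=k)
    return model.predict([[x0]])[0]

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 4.0, 200))
y = np.sin(x) + rng.normal(0.0, 0.1, 200)
y[::25] += 3.0                                  # gross outliers
grid = np.linspace(0.2, 3.8, 10)
print([round(robust_local_linear(x, y, g), 3) for g in grid])
```

The bounded influence of the Huber loss is what lets the nonlinear-component estimate behave almost as if the contaminated points were absent, mirroring the oracle property claimed above.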
NASA Astrophysics Data System (ADS)
Chadha, R.; Bali, A.
2016-05-01
Rapid, sensitive, cost-effective and reproducible stability-indicating derivative spectrophotometric methods have been developed for the estimation of dronedarone HCl employing peak-zero (P-0) and peak-peak (P-P) techniques, and their stability-indicating potential was assessed in forced-degraded solutions of the drug. The methods were validated with respect to linearity, accuracy, precision and robustness. Excellent linearity was observed over the concentration range 2-40 μg/ml (r² = 0.9986). LOD and LOQ values for the proposed methods ranged from 0.42-0.46 μg/ml and 1.21-1.27 μg/ml, respectively, and excellent recovery of the drug was obtained in the tablet samples (99.70 ± 0.84%).
Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T
2015-01-01
Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses the indicator 2/n to identify a linear fit, where n is the number of terms in the series. For a series that agrees with a linear fit, the ratios R_max = (a_max - a_min)/(S_n - a_min*n) and R_min = (a_max - a_min)/(a_max*n - S_n) are both equal to 2/n, where a_max is the maximum element, a_min is the minimum element and S_n is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the form y = c, then R_max > 2/n and R_min > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of the outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of the data that do not agree with the linear fit is very small, of the order of ±10^-4%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
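The two ratios translate directly into code (notation mine). The identity R_max = R_min = 2/n holds exactly for a series with a perfect linear trend, and an element that breaks the fit pushes the corresponding ratio above the threshold:

```python
import numpy as np

def linear_fit_ratios(series):
    a = np.asarray(series, dtype=float)
    n, s = len(a), a.sum()
    spread = a.max() - a.min()
    r_max = spread / (s - a.min() * n)     # large => maximum is suspect
    r_min = spread / (a.max() * n - s)     # large => minimum is suspect
    return r_max, r_min, 2.0 / n

print(linear_fit_ratios([1, 2, 3, 4, 5]))  # linear series: both ratios = 2/n
print(linear_fit_ratios([1, 1, 1, 1, 5]))  # y = c plus an outlier: r_max > 2/n
```

With the thresholds 2/n * (1 + k1) and 2/n * (1 + k2) defined above, repeatedly removing the flagged extreme element and recomputing the ratios strips outliers and noise without any distributional assumption.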
NASA Astrophysics Data System (ADS)
Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra
2016-06-01
Health effects associated with occupational exposure to particulate matter have been studied by several authors. In this study, six industries from five different areas were selected: Cork company 1, Cork company 2, poultry, slaughterhouse for cattle, riding arena and production of animal feed. The measurement tool was a portable direct-reading device. This tool provides information on the particle number concentration for six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. These size fractions were chosen because they might be more closely related to adverse health effects. The aim is to identify the particles that best discriminate the industries, with the ultimate goal of classifying industries regarding potential negative effects on workers' health. Several methods of discriminant analysis were applied to the data on occupational exposure to particulate matter and compared with respect to classification accuracy. The selected methods were linear discriminant analysis (LDA); quadratic discriminant analysis (QDA); robust linear discriminant analysis with selected estimators (MLE (Maximum Likelihood Estimators), MVE (Minimum Volume Ellipsoid), "t", MCD (Minimum Covariance Determinant), MCD-A, MCD-B); multinomial logistic regression; and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest rate of classification accuracy in the data set under study. Results indicate that the particle number concentration at the 0.5 µm diameter is the parameter that best discriminates the industries.
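A sketch of the comparison protocol on a synthetic stand-in (the industrial particle-count data are not public): the same six-feature observations are fed to each classifier and held-out accuracy is compared. Hyperparameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# six features, mimicking the six particle-diameter channels
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for clf in [LinearDiscriminantAnalysis(),
            QuadraticDiscriminantAnalysis(),
            LogisticRegression(max_iter=1000),
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)]:
    print(type(clf).__name__, round(clf.fit(Xtr, ytr).score(Xte, yte), 3))
```

The robust LDA variants listed above (MVE, MCD and related estimators) follow the same protocol but replace the sample mean and covariance with robust estimates before building the discriminant rule.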
Fully-Implicit Reconstructed Discontinuous Galerkin Method for Stiff Multiphysics Problems
NASA Astrophysics Data System (ADS)
Nourgaliev, Robert
2015-11-01
A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which are attributed to many multiphysics problems. We demonstrate the method's capabilities for solving compressible fluid-solid systems (in the low Mach number limit), with phase change (melting/solidification), as motivated by applications in Additive Manufacturing. We focus on the method's accuracy (in both space and time), as well as robustness and solvability of the system of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly-stiff problems with melting/solidification, emphasizing the advantages from tight coupling of mass, momentum and energy conservation equations, as well as orthogonality of basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices, and rapid convergence of the Krylov-based linear solver. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and funded by the LDRD at LLNL under project tracking code 13-SI-002.
Influence of a high vacuum on the precise positioning using an ultrasonic linear motor.
Kim, Wan-Soo; Lee, Dong-Jin; Lee, Sun-Kyu
2011-01-01
This paper presents an investigation of an ultrasonic linear motor stage for use in a high vacuum environment. The slider table is driven by a hybrid bolt-clamped Langevin-type ultrasonic linear motor, which is excited at different modes of its natural frequencies in both the lateral and longitudinal directions. In general, the friction behavior in a vacuum environment differs from that at atmospheric pressure, and this difference significantly affects the performance of the ultrasonic linear motor. In this paper, to consistently provide stable, high output power in a high vacuum, frequency matching was conducted. Moreover, to achieve fine control performance in the vacuum environment, a modified nominal characteristic trajectory following control method was adopted. Finally, the stage was operated under high vacuum conditions, and its operating performance was investigated and compared with that of a conventional PI compensator. As a result, robust positioning with nanometer-level accuracy was accomplished under high vacuum conditions.
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies but generally deteriorated in performance at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
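The structural trick can be seen in a few lines: in normal mode coordinates the dynamic stiffness is diagonal, so the open-loop frequency response costs one scalar division per mode instead of a full-matrix solve. This sketch covers only the proportionally damped, open-loop case; the numbers are illustrative.

```python
import numpy as np

def modal_frf(omega, wn, zeta, phi_in, phi_out):
    """H(j*omega) via modal superposition: O(n) per frequency point."""
    den = wn**2 - omega**2 + 2j * zeta * wn * omega   # one scalar per mode
    return np.sum(phi_out * phi_in / den)

wn = np.array([10.0, 35.0, 80.0])        # modal frequencies (rad/s)
zeta = np.full(3, 0.02)                  # modal damping ratios
phi_in = np.ones(3)                      # mode shapes at the input DOF
phi_out = np.array([1.0, -0.5, 0.2])     # mode shapes at the output DOF
freqs = np.linspace(1.0, 100.0, 500)
H = np.array([modal_frf(w, wn, zeta, phi_in, phi_out) for w in freqs])
print(np.abs(H).max())
```

A dense state-space evaluation would instead solve (jωI − A)x = B at every frequency, which is the quadratic-to-cubic cost per point the abstract compares against.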
The holistic analysis of gamma-ray spectra in instrumental neutron activation analysis
NASA Astrophysics Data System (ADS)
Blaauw, Menno
1994-12-01
A method for the interpretation of γ-ray spectra as obtained in INAA using linear least squares techniques is described. Results obtained using this technique and the traditional method previously in use at IRI are compared. It is concluded that the method presented performs better with respect to the number of detected elements, the resolution of interferences and the estimation of the accuracies of the reported element concentrations. It is also concluded that the technique is robust enough to obviate the deconvolution of multiplets.
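A toy version of the holistic idea: model the whole spectrum as one linear combination of per-element response shapes and solve a single least-squares problem, so overlapping peaks are resolved jointly rather than deconvolved one multiplet at a time. The synthetic Gaussian signatures below are assumptions for illustration only.

```python
import numpy as np

channels = np.arange(512)
def peak(mu, sigma=3.0):
    return np.exp(-0.5 * ((channels - mu) / sigma) ** 2)

# per-element spectral signatures; elements 1 and 3 interfere near channel 305
A = np.column_stack([peak(100) + 0.4 * peak(310),
                     peak(180),
                     peak(305)])
true_conc = np.array([5.0, 2.0, 1.0])
y = A @ true_conc + np.random.default_rng(3).normal(0.0, 0.05, 512)

est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(est)   # the interference is resolved by the joint fit
```

The covariance of the least-squares solution falls out of the same linear algebra, which is how per-element concentration uncertainties can be reported directly.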
Field by field hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1993-01-01
A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves and the accuracy of some flux difference splitting schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field by field decomposition involved in FDS methods. The scheme does not use a spatial switch to be tuned up according to the local smoothness of the approximate solution.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
SNR-adaptive stream weighting for audio-MES ASR.
Lee, Ki-Seung
2008-08-01
Myoelectric signals (MESs) from the speaker's mouth region have been successfully shown to improve the noise robustness of automatic speech recognizers (ASRs), thus promising to extend their usability in implementing noise-robust ASR. In the recognition system presented herein, extracted audio and facial MES features were integrated by a decision fusion method, where the likelihood score of the audio-MES observation vector was given by a linear combination of the class-conditional observation log-likelihoods of two classifiers, using appropriate weights. We developed a weighting process adaptive to SNRs. The main objective of the paper involves determining the optimal SNR classification boundaries and constructing a set of optimum stream weights for each SNR class. These two parameters were determined by a method based on a maximum mutual information criterion. Acoustic and facial MES data were collected from five subjects, using a 60-word vocabulary. Four types of acoustic noise, including babble, car, aircraft, and white noise, were acoustically added to clean speech signals with SNR ranging from -14 to 31 dB. The classification accuracy of the audio ASR was as low as 25.5%, whereas the classification accuracy of the MES ASR was 85.2%. The classification accuracy could be further improved by employing the proposed audio-MES weighting method, reaching as high as 89.4% in the case of babble noise. A similar result was also found for the other types of noise.
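A minimal sketch of the fusion rule: pick a stream weight by SNR class and linearly combine the two streams' class-conditional log-likelihoods. The boundaries and weights below are placeholders; the paper selects both with a maximum mutual information criterion.

```python
import numpy as np

snr_edges = np.array([-5.0, 10.0])     # assumed SNR class boundaries (dB)
audio_weights = [0.2, 0.5, 0.8]        # audio stream weight per SNR class

def fused_scores(logp_audio, logp_mes, snr_db):
    lam = audio_weights[int(np.searchsorted(snr_edges, snr_db))]
    return lam * logp_audio + (1.0 - lam) * logp_mes

logp_audio = np.log([0.2, 0.7, 0.1])   # toy per-word scores from the audio ASR
logp_mes = np.log([0.6, 0.3, 0.1])     # toy per-word scores from the MES ASR
print(np.argmax(fused_scores(logp_audio, logp_mes, snr_db=-10.0)))  # MES wins
print(np.argmax(fused_scores(logp_audio, logp_mes, snr_db=25.0)))   # audio wins
```

At -10 dB the audio weight is small, so the recognizer follows the noise-immune MES stream, reproducing the qualitative behavior reported above.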
Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.
2015-01-01
We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
Barth, Aline Bergesch; de Oliveira, Gabriela Bolfe; Malesuik, Marcelo Donadel; Paim, Clésio Soldatelli; Volpato, Nadia Maria
2011-08-01
A stability-indicating liquid chromatography method for the determination of the antifungal agent butenafine hydrochloride (BTF) in a cream was developed and validated, using the Plackett-Burman experimental design for robustness evaluation. The drug's photodegradation kinetics was also determined. The analytical column was operated with acetonitrile, methanol and a solution of triethylamine 0.3% adjusted to pH 4.0 (6:3:1) at a flow rate of 1 mL/min and detection at 283 nm. BTF extraction from the cream was done with n-butyl alcohol and methanol in an ultrasonic bath. The degradation conditions were: acid and basic media with HCl 1M and NaOH 1M, respectively, oxidation with H(2)O(2) 10%, and exposure to UV-C light. No interference in the BTF elution was verified. Linearity was assessed (r(2) = 0.9999) and ANOVA showed no significant deviation from linearity (p > 0.05). Adequate results were obtained for repeatability, intra-day precision, and accuracy. Critical factors were selected to examine the method robustness with the two-level Plackett-Burman experimental design, and no significant factors were detected (p > 0.05). The BTF photodegradation kinetics was determined for the standard and for the cream, both in methanolic solution, under UV light at 254 nm. The degradation process can be described by first-order kinetics in both cases.
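For the kinetics part, first-order degradation means ln C(t) falls linearly with exposure time, so the rate constant is the negative slope of a log-linear fit. A small sketch with made-up exposure data:

```python
import numpy as np

# First-order photodegradation: C(t) = C0 * exp(-k t), so ln(C) is linear
# in exposure time. Hypothetical UV-C exposure data for illustration only.
t = np.array([0., 1., 2., 4., 8.])            # hours under UV light
c = np.array([100., 86., 74., 55., 30.])      # % BTF remaining

k = -np.polyfit(t, np.log(c), 1)[0]           # rate constant (1/h)
t_half = np.log(2) / k
print(f"k = {k:.3f} 1/h, t1/2 = {t_half:.1f} h")
```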
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, Brian T.
This dissertation focuses on the development of a fully-implicit, high-order compressible flow solver with phase change. The work is motivated by laser-induced phase change applications, particularly by the need to develop large-scale multi-physics simulations of the selective laser melting (SLM) process in metal additive manufacturing (3D printing). Simulations of the SLM process require precise tracking of multi-material solid-liquid-gas interfaces, due to laser-induced melting/solidification and evaporation/condensation of metal powder in an ambient gas. These rapid density variations and phase change processes tightly couple the governing equations, requiring a fully compressible framework to robustly capture the rapid density variations of the ambient gas and the melting/evaporation of the metal powder. For non-isothermal phase change, the velocity is gradually suppressed through the mushy region by a variable viscosity and Darcy source term model. The governing equations are discretized up to 4th-order accuracy with our reconstructed Discontinuous Galerkin spatial discretization scheme and up to 5th-order accuracy with L-stable fully implicit time discretization schemes (BDF2 and ESDIRK3-5). The resulting set of non-linear equations is solved using a robust Newton-Krylov method, with the Jacobian-free version of the GMRES solver for linear iterations. Due to the stiffness associated with the acoustic waves and thermal and viscous/material strength effects, preconditioning the GMRES solver is essential. A robust and scalable approximate block factorization preconditioner was developed, which utilizes the velocity-pressure (vP) and velocity-temperature (vT) Schur complement systems. This multigrid block reduction preconditioning technique converges for high CFL/Fourier numbers and exhibits excellent parallel and algorithmic scalability on classic benchmark problems in fluid dynamics (lid-driven cavity flow and natural convection heat transfer) as well as for laser-induced phase change problems in 2D and 3D.
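SciPy ships a Jacobian-free Newton-Krylov driver of the kind described above; a toy sketch on a 1D nonlinear heat equation (a stand-in problem, with the block-factorization preconditioner omitted):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Implicit backward-Euler residual for a 1D nonlinear heat equation
    u_t = (k(u) u_x)_x with k(u) = 1 + u**2, standing in for the stiff,
    tightly coupled systems discussed above."""
    dt, dx = 1e-2, 1.0 / (u.size - 1)
    k = 1.0 + (0.5 * (u[1:] + u[:-1]))**2      # face-centered conductivity
    flux = k * np.diff(u) / dx
    r = np.empty_like(u)
    r[1:-1] = (u[1:-1] - u_old[1:-1]) / dt - np.diff(flux) / dx
    r[0], r[-1] = u[0] - 1.0, u[-1]            # Dirichlet boundaries
    return r

u_old = np.linspace(1.0, 0.0, 101)
# Jacobian-free Newton-Krylov: GMRES inner iterations, no explicit Jacobian.
# A production solver would pass a preconditioner via inner_M (e.g. a block
# factorization as in the dissertation); omitted here for brevity.
u_new = newton_krylov(residual, u_old, method='gmres', f_tol=1e-8)
```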
UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms
NASA Astrophysics Data System (ADS)
Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.
2016-01-01
An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method showed good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated a relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33 μg/ml, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.
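Detection and quantitation limits in such validations are commonly computed from the ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation of the calibration line and S its slope. A sketch with hypothetical calibration points (not the paper's data):

```python
import numpy as np

# Hypothetical calibration data over the validated 100-300 ug/ml range
conc = np.array([100., 150., 200., 250., 300.])          # ug/ml
absorbance = np.array([0.210, 0.312, 0.409, 0.513, 0.611])

slope, intercept = np.polyfit(conc, absorbance, 1)
resid = absorbance - (slope * conc + intercept)
sigma = resid.std(ddof=2)                # residual SD of the regression

# ICH Q2(R1) formulas used in validation studies like the one above
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.2f} ug/ml, LOQ = {loq:.2f} ug/ml")
```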
An efficient flexible-order model for 3D nonlinear water waves
NASA Astrophysics Data System (ADS)
Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.
2009-04-01
The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented, together with demonstrations of grid-independent iteration count and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems, including steep nonlinear waves and a shoaling problem, which show good agreement with experimental measurements and other calculations from the literature.
Prado, A H; Borges, M C; Eloy, J O; Peccinini, R G; Chorilli, M
2017-10-01
Cutaneous penetration is a critical factor in the use of sunscreen, as the compounds should not reach systemic circulation, in order to avoid the induction of toxicity. The evaluation of the skin penetration and permeation of the UVB filter octyl methoxycinnamate (OMC) is essential for the development of a successful sunscreen formulation. Liquid-crystalline systems are innovative and potential carriers of OMC, possessing several advantages, including controlled release and protection of the filter from degradation. In this study, a new and effective method was developed using ultra-high performance liquid chromatography (UPLC) with ultraviolet detection (UV) for the quantitative analysis of penetration of OMC-loaded liquid crystalline systems into the skin. The following parameters were assessed in the method: selectivity, linearity, precision, accuracy, robustness, limit of detection (LOD), and limit of quantification (LOQ). The analytical curve was linear in the range from 0.25 to 250 μg.mL-1, precise, with a standard deviation of 0.05-1.24%, accurate, with recoveries in the range from 96.72 to 105.52%, and robust, with adequate values for the LOD and LOQ of 0.1 and 0.25 μg.mL-1, respectively. The method was successfully used to determine the in vitro skin permeation of OMC-loaded liquid crystalline systems. The results of the in vitro tests on Franz cells showed low cutaneous permeation and high retention of the OMC, particularly in the stratum corneum, owing to its high lipophilicity, which is desirable for a sunscreen formulation.
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO(2) ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning are required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.
A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.
Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye
2017-11-01
Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was lower by a certain degree one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, achieving an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
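A minimal sketch of the modeling step: per-subject regression from beat-to-beat ECG/PPG features to cuff-referenced BP, with both a linear baseline and an SVR as in the study. The feature matrix and targets below are synthetic, and the paper's genetic-algorithm feature selection is omitted:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# X: beat-to-beat features from simultaneous ECG/PPG (PTT among them),
# y: cuff-referenced systolic BP. Synthetic stand-ins for illustration.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 14))
y = 120 - 8.0 * X[:, 0] + 3.0 * X[:, 3] + rng.normal(0, 2.0, 500)

# Per-subject models, as in the study: a linear baseline and an SVR
lin = LinearRegression().fit(X[:400], y[:400])
svr = SVR(kernel='rbf', C=10.0, epsilon=0.5).fit(X[:400], y[:400])

for name, model in [("linear", lin), ("SVR", svr)]:
    err = model.predict(X[400:]) - y[400:]
    print(f"{name}: mean error {err.mean():+.3f} ± {err.std():.3f} mmHg")
```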
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
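The classic NEO pipeline that the paper improves on: a square-minus-product energy signal followed by a threshold proportional to its mean. The sketch below implements only that traditional fixed-scaling rule (the constant c is a conventional choice, not the paper's optimized threshold):

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1]**2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Traditional NEO detector: threshold = c * mean(psi). The paper
    replaces the fixed scaling c with a threshold found automatically
    by an empirical gradient search; only the classic rule is shown."""
    psi = neo(x)
    thr = c * psi.mean()
    return np.flatnonzero(psi > thr), thr

# Synthetic trace: noise plus a few injected spikes
rng = np.random.default_rng(2)
x = rng.normal(0, 1.0, 30_000)
x[[5_000, 12_345, 22_222]] += 12.0
idx, thr = detect_spikes(x)
```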
Multi-wavelength approach towards on-product overlay accuracy and robustness
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John
2018-03-01
The success of the diffraction-based overlay (DBO) technique1,4,5 in the industry stems not just from its good precision and low tool-induced shift, but also from the measurement accuracy2 and robustness that DBO can provide. Significant effort has been put into capitalizing on the potential of DBO to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) for how to use these wavelengths are of high importance for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All these are discussed. Moreover, another aspect of boosting measurement accuracy and robustness is discussed that deploys the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method to make overlay measurements immune to process stack variations and also to report health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported by both measurement data and simulation from many product stacks.
Quasi-eccentricity error modeling and compensation in vision metrology
NASA Astrophysics Data System (ADS)
Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin
2018-04-01
Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error, and it needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.
2007-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic License Plate Recognition (ALPR). Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We show that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA based approach.
Characterization of atherosclerotic plaques by cross-polarization optical coherence tomography
NASA Astrophysics Data System (ADS)
Gubarkova, Ekaterina V.; Dudenkova, Varvara V.; Feldchtein, Felix I.; Timofeeva, Lidia B.; Kiseleva, Elena B.; Kuznetsov, Sergei S.; Moiseev, Alexander A.; Gelikonov, Gregory V.; Vitkin, Alex I.; Gladkova, Natalia D.
2016-02-01
We combined cross-polarization optical coherence tomography (CP OCT) and non-linear microscopy based on second harmonic generation (SHG) and two-photon-excited fluorescence (2PEF) to assess collagen and elastin fibers in the development of the atherosclerotic plaque (AP). The study shows the potential of CP OCT for the assessment of the condition of collagen and elastin fibers in atherosclerotic arteries. Specifically, the additional information afforded by CP OCT, related to the birefringence and cross-scattering properties of arterial tissues, may improve the robustness and accuracy of assessment of the microstructure and composition of the plaque at different stages of atherosclerosis.
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to refinery production planning and a batch process scheduling problem are presented. PMID:21935263
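For the simplest of these sets, interval (box) uncertainty in the constraint coefficients, the robust counterpart stays a linear program: the worst case adds the interval half-widths times |x| to the left-hand side. A toy sketch with invented data:

```python
import numpy as np
from scipy.optimize import linprog

# Nominal problem: max 3x1 + 2x2  s.t.  a^T x <= 10, x >= 0,
# with uncertain a = [2, 1] varying in the interval a_j ± ahat_j.
a = np.array([2.0, 1.0])
ahat = np.array([0.4, 0.2])          # interval half-widths
b = 10.0

# Box (interval) uncertainty set: worst case adds ahat^T |x|; for x >= 0
# the robust counterpart remains a linear program: (a + ahat)^T x <= b.
res_nom = linprog(c=[-3, -2], A_ub=[a], b_ub=[b], bounds=[(0, None)] * 2)
res_rob = linprog(c=[-3, -2], A_ub=[a + ahat], b_ub=[b], bounds=[(0, None)] * 2)
print("nominal objective:", -res_nom.fun)   # optimistic
print("robust  objective:", -res_rob.fun)   # feasible for every realization
```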
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
Sokoliess, Torsten; Köller, Gerhard
2005-06-01
A chiral capillary electrophoresis system allowing the determination of the enantiomeric purity of an investigational new drug was developed using a generic method development approach for basic analytes. The method was optimized in terms of type and concentration of both cyclodextrin (CD) and electrolyte, buffer pH, temperature, voltage, and rinsing procedure. Optimal chiral separation of the analyte was obtained using an electrolyte with 2.5% carboxymethyl-beta-CD in 25 mM NaH2PO4 (pH 4.0). Interchanging the inlet and outlet vials after each run improved the method's precision. To assure the method's suitability for the control of enantiomeric impurities in pharmaceutical quality control, its specificity, linearity, precision, accuracy, and robustness were validated according to the requirements of the International Conference on Harmonization. The usefulness of our generic method development approach for the validation of robustness was demonstrated.
Nikam, P. H.; Kareparamban, J. A.; Jadhav, A. P.; Kadam, V. J.
2013-01-01
Ursolic acid, a pentacyclic triterpenoid, possesses a wide range of pharmacological activities. It shows hypoglycemic, antiandrogenic, antibacterial, anti-inflammatory, antioxidant, diuretic and cynogenic activity. It is commonly present in plants, especially in the coating of leaves and fruits, such as apple fruit, vinca leaves, rosemary leaves, and eucalyptus leaves. A simple high-performance thin layer chromatographic method has been developed for the quantification of ursolic acid from apple peel (Malus domestica). The samples were dissolved in methanol and linear ascending development was carried out in a twin trough glass chamber. The mobile phase was toluene:ethyl acetate:glacial acetic acid (70:30:2). The linear regression analysis data for the calibration plots showed a good linear relationship with r2=0.9982 in the concentration range 0.2-7 μg/spot with respect to peak area. According to the ICH guidelines the method was validated for linearity, accuracy, precision, and robustness. Statistical analysis of the data showed that the method is reproducible and selective for the estimation of ursolic acid. PMID:24302805
Control method for physical systems and devices
Guckenheimer, John
1997-01-01
A control method for stabilizing systems or devices that are outside the control domain of a linear controller is provided. When applied to nonlinear systems, the effectiveness of this method depends upon the size of the domain of stability that is produced for the stabilized equilibrium. If this domain is small compared to the accuracy of measurements or the size of disturbances within the system, then the linear controller is likely to fail within a short period. Failure of the system or device can be catastrophic: the system or device can wander far from the desired equilibrium. The method of the invention presents a general procedure to recapture the stability of a linear controller when the trajectory of a system or device leaves its region of stability. By using a hybrid strategy based upon discrete switching events within the state space of the system or device, the system or device is returned from a much larger domain to the region of stability utilized by the linear controller. The control procedure is robust and remains effective under large classes of perturbations of a given underlying system or device.
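A schematic of the hybrid strategy, under the assumption that the switching rule can be summarized as "trust the linear gain inside its estimated stability region, otherwise trigger a discrete recovery action". All names and numbers below are illustrative, not the patent's construction:

```python
import numpy as np

def hybrid_step(x, K, region_radius, recovery_control):
    """One control decision of a hybrid strategy: use the linear gain K
    inside its (estimated) domain of stability, otherwise fire a discrete
    switching/recovery action that drives the state back toward that
    domain. `region_radius` and `recovery_control` are placeholders; the
    invention defines the switching surfaces in the system's state space.
    """
    if np.linalg.norm(x) <= region_radius:
        return -K @ x                  # linear controller is trusted here
    return recovery_control(x)         # discrete event: switch strategies

# Example: a crude recovery law that pushes the state back inward
K = np.array([[2.0, 1.0]])
u = hybrid_step(np.array([3.0, -1.0]), K, region_radius=1.5,
                recovery_control=lambda x: -5.0 * np.sign(x[:1]))
```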
Efficient and robust model-to-image alignment using 3D scale-invariant features.
Toews, Matthew; Wells, William M
2013-04-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
2017-12-01
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): given a point set in any dimension, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation of its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Macrocell path loss prediction using artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Usman, Abraham U.; Okereke, Okpo U.; Omizegba, Elijah E.
2014-04-01
The prediction of propagation loss is a practical non-linear function approximation problem which linear regression or auto-regression models are limited in their ability to handle. However, computational intelligence techniques such as artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) have been shown to have great ability to handle non-linear function approximation and prediction problems. In this study, a multiple layer perceptron neural network (MLP-NN), a radial basis function neural network (RBF-NN) and an ANFIS network were trained using actual signal strength measurements taken in certain suburban areas of Bauchi metropolis, Nigeria. The trained networks were then used to predict propagation losses at the stated areas under differing conditions. The predictions were compared with the prediction accuracy of the popular Hata model. The ANFIS model gave a better fit in all cases, with higher R2 values in each case, and on average was more robust than the MLP and RBF models as it generalised better to unseen data.
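The empirical baseline here is the Okumura-Hata urban formula, against which a small MLP can be fitted to drive-test data. The sketch below generates synthetic "measurements" from Hata plus noise purely to illustrate the workflow:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def hata_urban(f_mhz, hb, hm, d_km):
    """Okumura-Hata median path loss (dB) for urban macrocells, valid
    roughly for 150-1500 MHz; the empirical baseline the trained
    ANN/ANFIS models were compared against."""
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * hm - (1.56 * np.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * np.log10(f_mhz) - 13.82 * np.log10(hb)
            - a_hm + (44.9 - 6.55 * np.log10(hb)) * np.log10(d_km))

# A small MLP standing in for the paper's trained networks; synthetic data
rng = np.random.default_rng(3)
d = rng.uniform(0.5, 10.0, 300)                       # km
measured = hata_urban(900.0, 30.0, 1.5, d) + rng.normal(0, 6.0, 300)
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                   random_state=0).fit(d.reshape(-1, 1), measured)
print("MLP R^2 on training area:", mlp.score(d.reshape(-1, 1), measured))
```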
Genetic Programming Transforms in Linear Regression Situations
NASA Astrophysics Data System (ADS)
Castillo, Flor; Kordon, Arthur; Villa, Carlos
The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated. Some recommendations for transforms selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing - robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.
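Multicollinearity of the kind GP transforms target is usually diagnosed with variance inflation factors, VIF_j = 1/(1 − R_j²). A self-contained sketch:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2
    comes from regressing column j on the remaining columns. Values above
    ~5-10 signal the multicollinearity that model respecification via
    GP-generated transforms aims to remove."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z1 = np.column_stack([np.ones(len(Z)), Z])
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        r2 = 1 - np.sum((y - Z1 @ beta)**2) / np.sum((y - y.mean())**2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(4)
x1 = rng.standard_normal(200)
x2 = x1 + 0.05 * rng.standard_normal(200)   # nearly collinear with x1
x3 = rng.standard_normal(200)
print(vif(np.column_stack([x1, x2, x3])))   # large VIFs flag x1 and x2
```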
Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures
NASA Astrophysics Data System (ADS)
Li, Quanbao; Wei, Fajie; Zhou, Shenghan
2017-05-01
Linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently-used approaches to feature extraction usually require linearity, independence, or large-sample conditions. However, in real world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA has the advantages of both parametric and nonparametric algorithms and higher classification accuracy. The quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA gives an alternative solution for discriminant cases of complex nonlinear feature extraction or unknown feature extraction. Finally, an application of LKNDA to the complex feature extraction of financial market activities is proposed.
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in terms of single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.
Low-Rank Linear Dynamical Systems for Motor Imagery EEG.
Zhang, Wenchang; Sun, Fuchun; Tan, Chuanqi; Liu, Shaobo
2016-01-01
The common spatial pattern (CSP) and other spatiospectral feature extraction methods have become the most effective and successful approaches to the problem of motor imagery electroencephalography (MI-EEG) pattern recognition from multichannel neural activity in recent years. However, these methods need a lot of preprocessing and postprocessing, such as filtering, demeaning, and spatiospectral feature fusion, which easily affect classification accuracy. In this paper, we utilize linear dynamical systems (LDSs) for EEG signal feature extraction and classification. The LDS model has several advantages, such as simultaneous spatial and temporal feature matrix generation, freedom from preprocessing or postprocessing, and low cost. Furthermore, a low-rank matrix decomposition approach is introduced to remove noise and the resting-state component in order to improve the robustness of the system. We then propose a low-rank LDS algorithm to decompose the feature subspace of LDSs on a finite Grassmannian and obtain better performance. Extensive experiments are carried out on public datasets from "BCI Competition III Dataset IVa" and "BCI Competition IV Database 2a." The results show that our proposed three methods yield higher accuracies compared with prevailing approaches such as CSP and CSSP.
Dołowy, Małgorzata; Kulpińska-Kucia, Katarzyna; Pyka, Alina
2014-01-01
A new specific, precise, accurate, and robust TLC-densitometry has been developed for the simultaneous determination of hydrocortisone acetate and lidocaine hydrochloride in combined pharmaceutical formulation. The chromatographic analysis was carried out using a mobile phase consisting of chloroform + acetone + ammonia (25%) in volume composition 8 : 2 : 0.1 and silica gel 60F254 plates. Densitometric detection was performed in UV at wavelengths 200 nm and 250 nm, respectively, for lidocaine hydrochloride and hydrocortisone acetate. The validation of the proposed method was performed in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and robustness. The applied TLC procedure is linear in hydrocortisone acetate concentration range of 3.75 ÷ 12.50 μg·spot−1, and from 1.00 ÷ 2.50 μg·spot−1 for lidocaine hydrochloride. The developed method was found to be accurate (the value of the coefficient of variation CV [%] is less than 3%), precise (CV [%] is less than 2%), specific, and robust. LOQ of hydrocortisone acetate is 0.198 μg·spot−1 and LOD is 0.066 μg·spot−1. LOQ and LOD values for lidocaine hydrochloride are 0.270 and 0.090 μg·spot−1, respectively. The assay value of both bioactive substances is consistent with the limits recommended by Pharmacopoeia. PMID:24526880
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid, undermining the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, the Iterative Linear Decision Rule (ILDR), to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies, including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both single- and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in the historical record. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
Kalariya, Pradipbhai D; Kumar Talluri, Murali V N; Gaitonde, Vinay D; Devrukhakar, Prashant S; Srinivas, Ragampeta
2014-08-01
The present work describes the systematic development of a robust, precise, and rapid reversed-phase liquid chromatography method for the simultaneous determination of eprosartan mesylate and its six impurities using quality-by-design principles. The method was developed in two phases, screening and optimization. During the screening phase, the most suitable stationary phase, organic modifier, and pH were identified. The optimization was performed for secondary influential parameters--column temperature, gradient time, and flow rate using eight experiments--to examine multifactorial effects of parameters on the critical resolution and generated design space representing the robust region. A verification experiment was performed within the working design space and the model was found to be accurate. This study also describes other operating features of the column packed with superficially porous particles that allow very fast separations at pressures available in most liquid chromatography instruments. Successful chromatographic separation was achieved in less than 7 min using a fused-core C18 (100 mm × 2.1 mm, 2.6 μm) column with linear gradient elution of 10 mM ammonium formate (pH 3.0) and acetonitrile as the mobile phase. The method was validated for specificity, linearity, accuracy, precision, and robustness in compliance with the International Conference on Harmonization Q2 (R1) guidelines. The impurities were identified by liquid chromatography with mass spectrometry. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P.; Kumar, G. Manoj
2012-01-01
Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real world applications, e.g. quality assurance and process monitoring. Specifically, variability in sample, system and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a non-linear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), due to its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis. PMID:22292496
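A sketch of the classification setup: an RBF-kernel SVM on standardized spectra, cross-validated. The spectra and labels below are random placeholders standing in for LIBS shots of pharmaceutical samples:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: LIBS emission spectra (n_shots x n_wavelength_bins), y: drug class.
# Synthetic placeholders; the paper used over-the-counter pharmaceuticals.
rng = np.random.default_rng(5)
X = rng.standard_normal((300, 1024))
y = rng.integers(0, 4, 300)

# RBF-kernel SVM handles the nonlinear ablation/self-absorption variability
# that linear methods such as PLS-DA and SIMCA struggle with.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
print(cross_val_score(clf, X, y, cv=5).mean())
```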
Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua
2018-04-28
To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance. An in-house designed aluminum (Al) array phantom with six 9 × 9 cm2 square regions of varying thickness was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics, including tube voltage accuracy, exposure accuracy and exposure linearity, was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD) and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We propose a novel approach that uses the kV flat panel detector available on the linac for X-ray generator testing. This approach eliminates the inefficiencies and variability associated with using third-party QA detectors while enabling an automated process. This article is protected by copyright. All rights reserved.
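The exposure-linearity check reduces to a coefficient of variation over mAs-normalized ROI signals. A tiny sketch with invented readings, using the CV ≤ 0.1 criterion quoted above:

```python
import numpy as np

# Mean detector signal in one phantom ROI at three mAs settings (made-up
# numbers): exposure linearity is judged by the coefficient of variation
# of the normalized outputs X_i = signal_i / mAs_i.
mas = np.array([1.0, 2.0, 4.0])
roi_mean = np.array([412.0, 836.0, 1649.0])

x = roi_mean / mas
cv = x.std(ddof=1) / x.mean()
print(f"exposure linearity CV = {cv:.3f} ->", "pass" if cv <= 0.1 else "fail")
```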
A prior feature SVM – MRF based method for mouse brain segmentation
Wu, Teresa; Bae, Min Hyeok; Zhang, Min; Pan, Rong; Badea, Alexandra
2012-01-01
We introduce an automated method, called prior feature Support Vector Machine- Markov Random Field (pSVMRF), to segment three-dimensional mouse brain Magnetic Resonance Microscopy (MRM) images. Our earlier work, extended MRF (eMRF) integrated Support Vector Machine (SVM) and Markov Random Field (MRF) approaches, leading to improved segmentation accuracy; however, the computation of eMRF is very expensive, which may limit its performance on segmentation and robustness. In this study pSVMRF reduces training and testing time for SVM, while boosting segmentation performance. Unlike the eMRF approach, where MR intensity information and location priors are linearly combined, pSVMRF combines this information in a nonlinear fashion, and enhances the discriminative ability of the algorithm. We validate the proposed method using MR imaging of unstained and actively stained mouse brain specimens, and compare segmentation accuracy with two existing methods: eMRF and MRF. C57BL/6 mice are used for training and testing, using cross validation. For formalin fixed C57BL/6 specimens, pSVMRF outperforms both eMRF and MRF. The segmentation accuracy for C57BL/6 brains, stained or not, was similar for larger structures like hippocampus and caudate putamen, (~87%), but increased substantially for smaller regions like susbtantia nigra (from 78.36% to 91.55%), and anterior commissure (from ~50% to ~80%). To test segmentation robustness against increased anatomical variability we add two strains, BXD29 and a transgenic mouse model of Alzheimer’s Disease. Segmentation accuracy for new strains is 80% for hippocampus, and caudate putamen, indicating that pSVMRF is a promising approach for phenotyping mouse models of human brain disorders. PMID:21988893
Fictitious Domain Methods for Fracture Models in Elasticity.
NASA Astrophysics Data System (ADS)
Court, S.; Bodart, O.; Cayol, V.; Koko, J.
2014-12-01
As surface displacements depend nonlinearly on source location and shape, simplifying assumptions are generally required to reduce computation time when inverting geodetic data. We present a generic Finite Element Method designed for pressurized or sheared cracks inside a linear elastic medium. A fictitious domain method is used to take the crack into account independently of the mesh. Besides the possibility of considering heterogeneous media, the approach permits the evolution of the crack through time or, more generally, through iterations: the goal is to modify as little as possible when the crack geometry changes. In particular, no re-meshing is required (the boundary conditions at the level of the crack are imposed by Lagrange multipliers), leading to a gain in computation time and resources with respect to classic finite element methods. The method is also robust with respect to the geometry, since we expect to observe the same behavior whatever the shape and position of the crack. We present numerical experiments which highlight the accuracy of our method (using convergence curves), the optimality of errors, and the robustness with respect to the geometry (with computation of errors on some quantities for all kinds of geometric configurations). We also provide 2D benchmark tests. The method is then applied to Piton de la Fournaise volcano, considering a pressurized crack inside a 3-dimensional domain, and the corresponding computation time and accuracy are compared with results from a mixed Boundary Element Method. In order to determine the crack's geometrical characteristics and pressure, inversions are performed combining fictitious domain computations with a near neighborhood algorithm. Performances are compared with those obtained combining a mixed boundary element method with the same inversion algorithm.
2013-01-01
Background: Identifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities, computer-based training, human computer interaction, etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals. Methods: Emotional ECG data were obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers: Bayesian classifier, regression tree, K-nearest neighbor and fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm. Results: Analysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using the RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better, with a maximum accuracy of 92.87% and 76.45% for classifying the six emotional states using random and subject-independent validation, respectively. Conclusions: The results indicate that the combination of non-linear analysis and HOS tends to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine-tuned to develop a real time system. PMID:23680041
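The RRS-based Hurst feature is the slope of log(R/S) against log(window size). A compact sketch (without the FVS or HOS extensions proposed in the paper):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Hurst exponent by Rescaled Range Statistics (RRS): slope of
    log(R/S) versus log(window size), a minimal version of the feature
    the emotion classifiers above are built on."""
    x = np.asarray(x, dtype=float)
    n = 2 ** int(np.log2(len(x)))       # use a power-of-two prefix
    sizes, rs = [], []
    size = n
    while size >= min_chunk:
        vals = []
        for start in range(0, n, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())
            r, s = dev.max() - dev.min(), seg.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(6)
print(hurst_rs(rng.standard_normal(4096)))   # ~0.5 for white noise
```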
Verification of low-Mach number combustion codes using the method of manufactured solutions
NASA Astrophysics Data System (ADS)
Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz
2007-11-01
Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications for the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
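The MMS recipe: pick an analytic solution, derive the source term it implies, feed that source to the solver, and confirm the observed order of accuracy. A self-contained 1D diffusion example (much simpler than the low-Mach combustion systems discussed, but the same verification logic):

```python
import numpy as np

# Manufactured solution for u_t = nu * u_xx on [0, 1]: choose
# u_m(x, t) = sin(pi x) * exp(-t), derive q = u_t - nu * u_xx analytically,
# add q to the solver, and check 2nd-order convergence in dx.
nu = 0.1

def manufactured(x, t):
    return np.sin(np.pi * x) * np.exp(-t)

def forcing(x, t):
    # q = du/dt - nu * d2u/dx2 for the chosen u_m
    return (nu * np.pi**2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)

def solve(nx, t_end=0.1):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / nu                      # explicit stability limit
    u, t = manufactured(x, 0.0), 0.0
    while t < t_end:
        dt_eff = min(dt, t_end - t)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt_eff * (nu * lap + forcing(x, t))
        t += dt_eff
        u[0] = u[-1] = 0.0                     # exact boundary values
    return np.abs(u - manufactured(x, t_end)).max()

e_coarse, e_fine = solve(41), solve(81)
print("observed order ≈", np.log2(e_coarse / e_fine))   # should approach 2
```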
Low-contrast underwater living fish recognition using PCANet
NASA Astrophysics Data System (ADS)
Sun, Xin; Yang, Jianping; Wang, Changgang; Dong, Junyu; Wang, Xinhua
2018-04-01
Quantitative and statistical analysis of ocean creatures is critical to ecological and environmental studies, and living fish recognition is one of the most essential requirements for the fishery industry. However, light attenuation and scattering are present in the underwater environment, which makes underwater images low-contrast and blurry. This paper designs a robust framework for accurate fish recognition. The framework introduces a two-stage PCA Network (PCANet) to extract abstract features from fish images. On a real-world fish recognition dataset, we use a linear SVM classifier and set penalty coefficients to address the class-imbalance issue. Feature visualization results show that our method can avoid feature distortion in boundary regions of the underwater image. Experimental results show that the PCA Network can extract discriminative features and achieve promising recognition accuracy. The framework improves the recognition accuracy of underwater living fish and can be readily applied in the marine fishery industry.
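A sketch of the first PCANet stage: mean-removed patches are collected and the leading eigenvectors of their covariance become convolution filters; the second stage repeats this on the filtered maps. Patch size, filter count, and the input images below are placeholders:

```python
import numpy as np

def pca_filters(images, k=7, n_filters=8):
    """Learn first-stage PCANet filters: gather k x k patches, remove the
    patch mean, and keep the leading eigenvectors of the patch covariance
    as convolution kernels. Stage two applies the same procedure to the
    stage-one filtered maps."""
    patches = []
    for img in images:
        for i in range(img.shape[0] - k + 1):
            for j in range(img.shape[1] - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())
    P = np.array(patches)                       # (n_patches, k*k)
    cov = P.T @ P / len(P)
    w, v = np.linalg.eigh(cov)                  # ascending eigenvalues
    filters = v[:, ::-1][:, :n_filters]         # keep top eigenvectors
    return filters.T.reshape(n_filters, k, k)

rng = np.random.default_rng(7)
fish_imgs = rng.random((10, 32, 32))            # placeholder fish crops
bank = pca_filters(fish_imgs)
print(bank.shape)                               # (8, 7, 7)
```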
Gujral, Rajinder Singh; Haque, Sk Manirul
2010-01-01
A simple and sensitive UV spectrophotometric method was developed and validated for the simultaneous determination of Potassium Clavulanate (PC) and Amoxicillin Trihydrate (AT) in bulk, pharmaceutical formulations, and human urine samples. The method was linear in the range of 0.2–8.5 μg/ml for PC and 6.4–33.6 μg/ml for AT. The absorbance was measured at 205 and 271 nm for PC and AT, respectively. The method was validated with respect to accuracy, precision, specificity, ruggedness, robustness, limit of detection, and limit of quantitation. It was used successfully for the quality assessment of four PC and AT drug products and of human urine samples with good precision and accuracy. The method is simple, specific, precise, accurate, reproducible, and low in cost. PMID:23675211
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
Li, Zhaoying; Zhou, Wenjie; Liu, Hao
2016-09-01
This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics, including parametric uncertainties, unmodeled uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses three topics: nonlinear estimation, target tracking, and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three-dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three-dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. The combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight-line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points calculated by the unscented transformation. This set of points is used to design robust controllers that minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems: the first involves the design of prefilters for linear and nonlinear spring-mass-dashpot systems, the second applies a feedback controller to a hovering helicopter, and the third applies the statistical robust controller design to a concurrent feed-forward/feedback controller structure for a high-speed low-tension tape drive.
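The unscented transformation at the heart of this work propagates a small set of sigma points through the nonlinearity instead of linearizing it; a basic sketch of the standard (second-order) version, before the higher-order extensions developed in the dissertation:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    the standard symmetric set of 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # rows of L.T = columns of L
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion, a classic UT test case
m, P = np.array([1.0, np.pi / 4]), np.diag([0.01, 0.01])
f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
print(unscented_transform(m, P, f, kappa=1.0))
```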
NASA Astrophysics Data System (ADS)
Ye, Y.
2017-09-01
This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR, and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. We first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. A fast similarity metric based on DOGH is then built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric offers superior matching performance and computational efficiency compared with state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
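The speed of such a similarity metric comes from evaluating template correlation in the frequency domain; a toy sketch using plain intensities in place of the DOGH descriptor channels:

```python
import numpy as np
from scipy.signal import fftconvolve

def fft_match(image, template):
    """Locate a template in an image via frequency-domain cross-correlation
    (correlation = convolution with the flipped, zero-mean template)."""
    t = template - template.mean()
    score = fftconvolve(image, t[::-1, ::-1], mode='valid')
    return np.unravel_index(np.argmax(score), score.shape)

rng = np.random.default_rng(2)
img = rng.random((256, 256))
tmpl = img[100:132, 50:82]       # crop the template from the image itself
print(fft_match(img, tmpl))      # expected near (100, 50)
```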
An exact solution for ideal dam-break floods on steep slopes
Ancey, C.; Iverson, R.M.; Rentschler, M.; Denlinger, R.P.
2008-01-01
The shallow-water equations are used to model the flow resulting from the sudden release of a finite volume of frictionless, incompressible fluid down a uniform slope of arbitrary inclination. The hodograph transformation and Riemann's method make it possible to transform the governing equations into a linear system and then deduce an exact analytical solution expressed in terms of readily evaluated integrals. Although the solution treats an idealized case never strictly realized in nature, it is uniquely well suited for testing the robustness and accuracy of numerical models of shallow-water flows on steep slopes. Copyright 2008 by the American Geophysical Union.
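As a flavor of how such exact solutions benchmark numerical shallow-water solvers, here is the classic Ritter solution for the zero-slope (horizontal-bed, dry-downstream) limit; the sloped-bed solution of the paper is more involved, so this is only its simplest relative:

```python
import numpy as np

def ritter_profile(x, t, h0, g=9.81):
    """Ritter (1892) exact dam-break depth profile on a horizontal,
    frictionless bed with initial reservoir depth h0."""
    c0 = np.sqrt(g * h0)                    # initial wave celerity
    h = np.zeros_like(x)
    quiet = x <= -c0 * t                    # undisturbed reservoir
    fan = (x > -c0 * t) & (x < 2 * c0 * t)  # rarefaction fan
    h[quiet] = h0
    h[fan] = (2 * c0 - x[fan] / t) ** 2 / (9 * g)
    return h                                # dry bed ahead of the front (x >= 2*c0*t)

x = np.linspace(-50.0, 100.0, 7)
print(ritter_profile(x, t=5.0, h0=1.0))
```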
Medvedovici, Andrei; Albu, Florin; Farca, Alexandru; David, Victor
2004-01-27
A new method for the determination of 2-[(dimethylamino)methyl]cyclohexanone (DAMC) in Tramadol (as active substance or active ingredient in pharmaceutical formulations) is described. The method is based on the derivatisation of 2-[(dimethylamino)methyl]cyclohexanone with 2,4-dinitrophenylhydrazine (2,4-DNPH) in acidic conditions followed by a reversed-phase liquid chromatographic separation with UV detection. The method is simple, selective, quantitative and allows the determination of 2-[(dimethylamino)methyl]cyclohexanone at the low ppm level. The proposed method was validated with respect to selectivity, precision, linearity, accuracy and robustness.
Entropy Stable Wall Boundary Conditions for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.
2014-01-01
Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite volume, finite difference, discontinuous Galerkin, and flux reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.
NASA Technical Reports Server (NTRS)
Parsani, Matteo; Carpenter, Mark H.; Nielsen, Eric J.
2015-01-01
Non-linear entropy stability and a summation-by-parts framework are used to derive entropy stable wall boundary conditions for the three-dimensional compressible Navier-Stokes equations. A semi-discrete entropy estimate for the entire domain is achieved when the new boundary conditions are coupled with an entropy stable discrete interior operator. The data at the boundary are weakly imposed using a penalty flux approach and a simultaneous-approximation-term penalty technique. Although discontinuous spectral collocation operators on unstructured grids are used herein for the purpose of demonstrating their robustness and efficacy, the new boundary conditions are compatible with any diagonal norm summation-by-parts spatial operator, including finite element, finite difference, finite volume, discontinuous Galerkin, and flux reconstruction/correction procedure via reconstruction schemes. The proposed boundary treatment is tested for three-dimensional subsonic and supersonic flows. The numerical computations corroborate the non-linear stability (entropy stability) and accuracy of the boundary conditions.
Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-11-01
In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as a function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.
Sparse Coding and Counting for Robust Visual Tracking
Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu
2016-01-01
In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges, such as occlusion and image corruption, effectively. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0 and L1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
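A minimal sketch of an accelerated proximal gradient loop with a combined L1/L0 penalty follows; applying the soft and hard thresholds sequentially is our simplification of the combined proximal operator, and the penalties and step size are illustrative, not the paper's:

```python
import numpy as np

def prox_l1_l0(z, lam1, lam0):
    """Approximate proximal step for lam1*||x||_1 + lam0*||x||_0:
    soft-threshold (L1), then hard-threshold (L0)."""
    w = np.sign(z) * np.maximum(np.abs(z) - lam1, 0.0)
    w[w**2 < 2 * lam0] = 0.0
    return w

def apg(A, b, lam1=0.1, lam0=0.01, n_iter=200):
    """Accelerated proximal gradient for min 0.5||Ax-b||^2 + lam1||x||_1 + lam0||x||_0."""
    x = y = np.zeros(A.shape[1])
    t = 1.0
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant
    for _ in range(n_iter):
        x_new = prox_l1_l0(y - step * A.T @ (A @ y - b), step * lam1, step * lam0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)     # Nesterov momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
print(np.nonzero(apg(A, A @ x_true))[0])  # support should include indices 0..4
```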
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and the autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA can model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. To overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model produces more accurate forecasts than the individual models. PMID:23766729
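A skeletal version of the hybrid idea, with ARIMA fitting the linear component and SVR modeling the residuals; the model orders and hyperparameters below are illustrative placeholders, not the PSO-tuned values of the paper:

```python
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
t = np.arange(200, dtype=float)
y = 0.05 * t + np.sin(t / 6.0) + 0.1 * rng.standard_normal(200)  # toy series

arima = ARIMA(y[:150], order=(2, 1, 1)).fit()   # linear component
resid = arima.resid                             # nonlinear leftovers for the SVR

lag = 4                                         # embed residuals with a few lags
X = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)])
svr = SVR(C=10.0, epsilon=0.01).fit(X, resid[lag:])

linear_fc = arima.forecast(steps=1)[0]
nonlinear_fc = svr.predict(resid[-lag:].reshape(1, -1))[0]
print("hybrid one-step forecast:", linear_fc + nonlinear_fc)
```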
Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and the autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA can model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. To overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model produces more accurate forecasts than the individual models.
Microemulsification: an approach for analytical determinations.
Lima, Renato S; Shiroma, Leandro Y; Teixeira, Alvaro V N C; de Toledo, José R; do Couto, Bruno C; de Carvalho, Rogério M; Carrilho, Emanuel; Kubota, Lauro T; Gobbi, Angelo L
2014-09-16
We address a novel method for analytical determinations that combines simplicity, rapidity, low consumption of chemicals, and portability with high analytical performance, taking into account parameters such as precision, linearity, robustness, and accuracy. This approach relies on the effect of the analyte content on the Gibbs free energy of dispersions, affecting the thermodynamic stabilization of emulsions or Winsor systems to form microemulsions (MEs). This phenomenon was expressed by the minimum volume fraction of amphiphile required to form a microemulsion (Φ(ME)), which was the analytical signal of the method. Thus, the measurements can be taken by visually monitoring the transition of the dispersions from cloudy to transparent during microemulsification, like a titration, without the use of electric energy. The studies performed were: phase behavior, droplet dimension by dynamic light scattering, analytical curves, and robustness tests. The reliability of the method was evaluated by determining water in ethanol fuels and monoethylene glycol in complex samples of liquefied natural gas. The dispersions were composed of water-chlorobenzene (water analysis) and water-oleic acid (monoethylene glycol analysis), with ethanol as the hydrotrope phase. The mean hydrodynamic diameter values for the nanostructures in the droplet-based water-chlorobenzene MEs were in the range of 1 to 11 nm. The microemulsification procedures were conducted by adding ethanol to water-oleic acid (W-O) mixtures with the aid of a micropipette and shaking. The Φ(ME) measurements were performed in a thermostatic water bath at 23 °C by direct observation, based on visual analysis of the media. The experiments to determine water demonstrated that the analytical performance depends on the composition of the ME, showing the flexibility of the developed method. The linear range was fairly broad, with limits of linearity up to 70.00% water in ethanol. For monoethylene glycol in water, in turn, the linear range was observed throughout the volume fraction of analyte. The best limits of detection were 0.32% v/v water in ethanol and 0.30% v/v monoethylene glycol in water. Furthermore, the accuracy was highly satisfactory. The natural gas samples provided by Petrobras exhibited color, particulate material, high ionic strength, and diverse compounds such as metals, carboxylic acids, and anions. These samples had conductivities of up to 2630 μS cm(-1); the conductivity of pure monoethylene glycol is only 0.30 μS cm(-1). Despite these challenges, the method allowed accurate measurements while bypassing steps such as extraction, preconcentration, and dilution of the sample. In addition, the levels of robustness were promising. This parameter was evaluated by investigating the effects of (i) deviations in the volumetric preparation of the dispersions and (ii) changes in temperature on the analyte contents recorded by the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k₃, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k₃. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages, including: 1) there is only one possible solution and no choice of starting parameter values is required, 2) parameter estimates are comparable in accuracy to those from nonlinear models, and 3) computational time is significantly reduced. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k₃ estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k₃ from noisy dynamic PET data.
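A minimal sketch of the linearized-model idea: write the tissue time-activity curve as a non-negative linear combination of basis functions derived from the input curve and solve with NNLS, which has a unique solution and needs no starting values. The two-basis design below is illustrative, not the paper's 2- or 3-compartment formulation:

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 60, 121)                 # minutes
cp = t * np.exp(-t / 8.0)                   # toy plasma input function
cumint = np.cumsum(cp) * (t[1] - t[0])      # running integral of the input

rng = np.random.default_rng(5)
tissue = 0.3 * cp + 0.02 * cumint + 0.01 * rng.standard_normal(t.size)

A = np.column_stack([cp, cumint])           # linear design matrix
coef, rnorm = nnls(A, tissue)               # unique non-negative least-squares fit
print("estimated coefficients:", coef)      # close to [0.3, 0.02]
```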
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and accuracy was assessed through the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.
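The evaluation pattern, a small neural network against a linear model compared by mean absolute percentage error, can be sketched as follows on synthetic data (the features, architecture, and split are arbitrary stand-ins, not the study's variables):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(6)
X = rng.random((80, 5))                          # kinematic/kinetic-style features
y = 1.5 + X[:, 0] ** 2 + 0.5 * np.sin(3 * X[:, 1]) + 0.05 * rng.standard_normal(80)

X_tr, X_te, y_tr, y_te = X[:60], X[60:], y[:60], y[60:]
for name, model in [("linear", LinearRegression()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                         random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "MAPE:",
          mean_absolute_percentage_error(y_te, model.predict(X_te)))
```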
Robust characterization of small grating boxes using rotating stage Mueller matrix polarimeter
NASA Astrophysics Data System (ADS)
Foldyna, M.; De Martino, A.; Licitra, C.; Foucher, J.
2010-03-01
In this paper we demonstrate the robustness of Mueller matrix polarimetry used in a multiple-azimuth configuration. We first demonstrate the efficiency of the method for the characterization of small-pitch gratings filling 250 μm wide square boxes. We used a Mueller matrix polarimeter directly installed in the clean room, equipped with a motorized rotating stage allowing access to arbitrary conical grating configurations. The projected beam spot size could be reduced to 60 × 25 μm, but for the measurements reported here this size was 100 × 100 μm. The optimal values of the parameters of a trapezoidal profile model, acquired for each azimuthal angle separately using a non-linear least-squares minimization algorithm, are shown for a typical grating. Further statistical analysis of the azimuth-dependent dimensional parameters provided realistic estimates of the confidence interval, giving direct information about the accuracy of the results. The mean values and the standard deviations were calculated for 21 different grating boxes, featuring in total 399 measured spectra and fits. The results for all boxes are summarized in a table which compares the optical method to 3D-AFM. The essential conclusion of our work is that the 3D-AFM values always fall into the confidence intervals provided by the optical method, which means that we have successfully estimated the accuracy of our results without direct comparison with another, non-optical, method. Moreover, this approach may provide a way to improve the accuracy of grating profile modeling by minimizing the standard deviations evaluated from multiple-azimuth results.
Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements
Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.
2016-01-01
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to the numerical shortcomings of linear tetrahedral (TET4) elements, the limited availability of quadratic tetrahedral elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior, and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both the convergence behavior and the accuracy of the predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements were illustrated by comparing their predictions with those for a HEX8 mesh in a simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
Shah, R B; Bryant, A; Collier, J; Habib, M J; Khan, M A
2008-08-06
A simple, sensitive, accurate, and robust stability-indicating analytical method is presented for the identification, separation, and quantitation of l-thyroxine and eight degradation impurities with an internal standard. The method was used in the presence of commonly used formulation excipients such as butylated hydroxyanisole, povidone, crospovidone, croscarmellose sodium, mannitol, sucrose, acacia, lactose monohydrate, confectionary sugar, microcrystalline cellulose, sodium lauryl sulfate, magnesium stearate, talc, and silicon dioxide. The two active thyroid hormones, 3,3',5,5'-tetra-iodo-l-thyronine (l-thyroxine-T4) and 3,3',5-tri-iodo-l-thyronine (T3), and degradation products including di-iodothyronine (T2), thyronine (T0), tyrosine (Tyr), di-iodotyrosine (DIT), mono-iodotyrosine (MIT), 3,3',5,5'-tetra-iodothyroacetic acid (T4AA), and 3,3',5-tri-iodothyroacetic acid (T3AA) were assayed by the current method. The separation of l-thyroxine and the eight impurities along with theophylline (internal standard) was achieved using a C18 column (25°C) with a mobile phase of trifluoroacetic acid (0.1%, v/v, pH 3)-acetonitrile in gradient elution at 0.8 ml/min at 223 nm. The sample diluent was 0.01 M methanolic NaOH. The method was validated according to FDA, USP, and ICH guidelines for inter-day accuracy, precision, and robustness after checking performance with system suitability. Tyr (4.97 min), theophylline (9.09 min), MIT (9.55 min), DIT (11.37 min), T0 (11.63 min), T2 (14.47 min), T3 (16.29 min), T4 (17.60 min), T3AA (22.71 min), and T4AA (24.83 min) separated in a single chromatographic run. A linear relationship (r2 > 0.99) was observed between the peak area ratio and the concentration for all of the compounds within the range of 2–20 μg/ml. The total time for analysis, equilibration, and recovery was 40 min. The method separated the analytes well from commonly employed formulation excipients. Accuracy ranged from 95 to 105% for T4 and 90 to 110% for all other compounds. Precision was <2% for all the compounds. The method was found to be robust to minor changes in injection volume, flow rate, column temperature, and gradient ratio. Validation results indicated that the method shows satisfactory linearity, precision, accuracy, and ruggedness, and stress degradation studies indicated that it can be used as a stability-indicating method for l-thyroxine in the presence of excipients.
Control design for robust stability in linear regulators: Application to aerospace flight control
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1986-01-01
Time domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of this research. After reviewing the recently developed upper bounds on the linear elemental (structured), time-varying perturbation of an asymptotically stable linear time-invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for the general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.
A comparative robustness evaluation of feedforward neurofilters
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter
1993-01-01
A comparative performance and robustness analysis is provided for feedforward neurofilters trained with back propagation to filter additive white noise. The signals used in this analysis are simulated pitch rate responses to typical pilot command inputs for a modern fighter aircraft model. Various configurations of nonlinear and linear neurofilters are trained to estimate exact signal values from input sequences of noisy sampled signal values. In this application, nonlinear neurofiltering is found to be more efficient than linear neurofiltering in removing the noise from responses of the nominal vehicle model, whereas linear neurofiltering is found to be more robust in the presence of changes in the vehicle dynamics. The possibility of enhancing neurofiltering through hybrid architectures based on linear and nonlinear neuroprocessing is therefore suggested as a way of taking advantage of the robustness of linear neurofiltering, while maintaining the nominal performance advantage of nonlinear neurofiltering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
2016-06-15
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error introduced by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated with respect to the root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method for point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
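The SR model's central step, approximating a new point cloud as a sparse linear combination of training clouds and reusing the weights on the surface manifold, can be sketched as follows; the Lasso penalty and synthetic clouds are illustrative, and ICP correspondence building is assumed already done:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n_train, n_pts = 20, 500
train_clouds = rng.random((n_train, n_pts * 3))   # flattened x,y,z per training cloud

w_true = np.zeros(n_train); w_true[[2, 7]] = [0.6, 0.4]
target = train_clouds.T @ w_true + 0.01 * rng.standard_normal(n_pts * 3)  # noisy cloud

sr = Lasso(alpha=1e-3, positive=True).fit(train_clouds.T, target)
w = sr.coef_
print("active training clouds:", np.nonzero(w > 1e-4)[0])   # expect [2, 7]

# The same sparse weights would then be propagated to the surface manifold:
# surface_estimate = train_surfaces.T @ w   (same combination, denser meshes)
```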
A portable integrated system to control an active needle
NASA Astrophysics Data System (ADS)
Konh, Bardia; Motalleb, Mahdi; Ashrafiuon, Hashem
2017-04-01
The primary objective of this work is to introduce an integrated portable system to operate a flexible active surgical needle with actuation capabilities. The smart needle uses the robust actuation capabilities of shape memory alloy (SMA) wires to drastically improve the accuracy of medical procedures such as brachytherapy. This, however, requires an integrated system to control the insertion of the needle via a linear motor and its deflection by the SMA wire in real time. The integrated system includes a flexible needle prototype, a Raspberry Pi computer, a linear stage motor, an SMA wire actuator, a power supply, an electromagnetic tracking system, and various communication supplies. The linear stage motor guides the needle into tissue. The power supply provides the appropriate current to the SMA actuator. The tracking system measures tip movement for feedback. The Raspberry Pi is the central tool that receives the tip movement feedback and controls the linear stage motor and the SMA actuator via the power supply. The algorithms implemented for communication and feedback control are also described. This paper demonstrates that the portable integrated system may be a viable solution for more effective procedures requiring surgical needles.
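A hypothetical sketch of the feedback loop described above, with placeholder stub classes standing in for the EM tracker, stage motor, and power-supply drivers (none of these reflect a real device API; only the control structure mirrors the text):

```python
import time

class EMTracker:                        # placeholder for the EM tracking system
    def __init__(self): self.tip = 0.0
    def read_tip_position(self): return self.tip

class StageMotor:                       # placeholder for the linear stage driver
    def step(self): pass                # advance the insertion one increment

class SMASupply:                        # placeholder for the programmable supply
    def set_current(self, amps): print(f"SMA current -> {amps:.2f} A")

def control_loop(tracker, motor, supply, targets, kp=0.8, dt=0.05):
    """Proportional steering: advance insertion, read tip deflection from the
    tracker, and command an SMA current proportional to the lateral error."""
    for target in targets:
        motor.step()
        error = target - tracker.read_tip_position()
        supply.set_current(max(0.0, min(kp * error, 1.5)))  # clamp to a safe range
        time.sleep(dt)                  # fixed-rate loop (~20 Hz)

control_loop(EMTracker(), StageMotor(), SMASupply(), targets=[0.5, 1.0, 1.5])
```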
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
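The tissue-specific intensity correction step can be sketched as a simple 1-D clustering of the currently overlapping voxel pairs followed by a class-wise remapping; the real method re-estimates such a mapping inside every Demons iteration, so this stand-alone version is only illustrative:

```python
import numpy as np

def intensity_match(moving_vals, fixed_vals, n_classes=3, n_iter=10):
    """Cluster overlapping voxels into a few tissue classes (1-D k-means on the
    moving image), then map each moving-image class to the mean intensity of the
    corresponding fixed-image voxels."""
    centers = np.quantile(moving_vals, np.linspace(0.1, 0.9, n_classes))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(moving_vals[:, None] - centers), axis=1)
        centers = np.array([moving_vals[labels == k].mean() if np.any(labels == k)
                            else centers[k] for k in range(n_classes)])
    targets = np.array([fixed_vals[labels == k].mean() for k in range(n_classes)])
    return targets[labels]               # tissue-specific remapped intensities

rng = np.random.default_rng(8)
fixed = np.concatenate([rng.normal(m, 5, 300) for m in (0, 100, 200)])
moving = np.concatenate([rng.normal(m, 5, 300) for m in (20, 150, 320)])  # biased CBCT
print(intensity_match(moving, fixed)[:5])
```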
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali
2011-04-15
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades, mobile mapping systems have performed their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity, and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented, which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE horizontal and 5.5 cm RMSE vertical. Such accuracies are sufficient to meet the requirements of the majority of airborne mapping applications.
Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee
2013-01-01
Moringa oleifera Lamarck (Moringaceae) is used as a multipurpose medicinal plant for the treatment of various diseases. Isoquercetin, astragalin, and crypto-chlorogenic acid have previously been found to be major active components in the leaves of this plant. In this study, a thin-layer chromatography (TLC) densitometric method was developed and validated for simultaneous quantification of these major components in the 70% ethanolic extracts of M. oleifera leaves collected from 12 locations. The average amounts of crypto-chlorogenic acid, isoquercetin, and astragalin were found to be 0.0473, 0.0427, and 0.0534% dry weight, respectively. The method was validated for linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness. Linearity was obtained in the range of 100-500 ng/spot with a correlation coefficient (r) over 0.9961. Intraday and interday precisions demonstrated relative standard deviations of less than 5%. The accuracy of the method was confirmed by determining the recovery; the average recoveries of each component from the extracts were in the range of 98.28 to 99.65%. Additionally, the leaves from Chiang Mai province contained the highest amounts of all active components. The proposed TLC-densitometric method is simple, accurate, precise, and cost-effective for routine quality control of M. oleifera leaf extracts.
Patil, Suyog S; Srivastava, Ashwini K
2013-01-01
A simple, precise, and rapid RPLC method has been developed, without incorporation of any ion-pair reagent, for the simultaneous determination of vitamin C (C) and seven B-complex vitamins, viz., thiamine hydrochloride (B1), pyridoxine hydrochloride (B6), nicotinamide (B3), cyanocobalamin (B12), folic acid, riboflavin (B2), and 4-aminobenzoic acid (Bx). Separations were achieved within 12.0 min at 30°C by gradient elution on an RP C18 column using a mobile phase consisting of a mixture of 15 mM ammonium formate buffer with 0.1% triethylamine adjusted to pH 4.0 with formic acid, and acetonitrile. Simultaneous UV detection was performed at 275 and 360 nm. The method was validated for system suitability, LOD, LOQ, linearity, precision, accuracy, specificity, and robustness in accordance with International Conference on Harmonization guidelines. The developed method was implemented successfully for the determination of the aforementioned vitamins in pharmaceutical formulations containing an individual vitamin, in their multivitamin combinations, and in human urine samples. The calibration curves for all analytes showed good linearity, with coefficients of correlation higher than 0.9998. Accuracy, intraday repeatability (n = 6), and interday repeatability (n = 7) were found to be satisfactory.
Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo
2013-01-01
A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was based on a C18 analytical column using a mobile phase consisting of ethanol:phosphoric acid solution (pH = 2.5) (35:65, v/v). The column temperature was set at 50°C and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear in the diltiazem concentration range of 0.5–50 μg/mL (r2 = 0.9996). Precision was evaluated by replicate analysis, in which the relative standard deviation (RSD) values for peak areas were below 2.0%. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty (5.63%) of the method was also estimated from method validation data. Accordingly, the proposed validated and sustainable procedure proved suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778
An integration of minimum local feature representation methods to recognize large variation of foods
NASA Astrophysics Data System (ADS)
Razali, Mohd Norhisham bin; Manshor, Noridayu; Halin, Alfian Abdul; Mustapha, Norwati; Yaakob, Razali
2017-10-01
Local invariant features have been shown to be successful in describing object appearances for image classification tasks. Such features are robust towards occlusion and clutter and are also invariant against scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. In this paper, we propose an integrated representation of the Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) descriptors, using a late fusion strategy. The proposed representation is used for food recognition from a dataset of food images with complex appearance variations. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. First, the individual local features are extracted to construct two visual vocabularies, representing SURF and SIFT. The visual vocabularies are then concatenated and fed into a linear Support Vector Machine (SVM) to classify the respective food categories. Experimental results demonstrate an impressive overall recognition rate of 82.38% classification accuracy on the challenging UEC-Food100 dataset.
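A compact sketch of the late-fusion pipeline, with random matrices standing in for SURF- and SIFT-like descriptors (vocabulary sizes, data, and labels are illustrative stand-ins):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(9)

def bof_histogram(descs, vocab):
    """Normalized bag-of-features histogram over a learned visual vocabulary."""
    words = vocab.predict(descs)                 # assign descriptors to visual words
    h = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return h / h.sum()

n_imgs, n_desc = 40, 60
surf_like = [rng.random((n_desc, 64)) for _ in range(n_imgs)]   # 64-D, like SURF
sift_like = [rng.random((n_desc, 128)) for _ in range(n_imgs)]  # 128-D, like SIFT
labels = rng.integers(0, 2, n_imgs)

vocab_a = KMeans(n_clusters=32, n_init=3, random_state=0).fit(np.vstack(surf_like))
vocab_b = KMeans(n_clusters=32, n_init=3, random_state=0).fit(np.vstack(sift_like))

# Late fusion: concatenate the per-descriptor histograms, then a linear SVM
X = np.array([np.concatenate([bof_histogram(a, vocab_a), bof_histogram(b, vocab_b)])
              for a, b in zip(surf_like, sift_like)])
clf = LinearSVC().fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```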
Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S
2008-01-01
This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.
Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions.
Tang, Jun; Zhang, Nan; Li, Dalin; Wang, Fei; Zhang, Binzhen; Wang, Chenguang; Shen, Chong; Ren, Jianbin; Xue, Chenyang; Liu, Jun
2016-07-11
A novel method based on the Pulse Coupled Neural Network (PCNN) algorithm is proposed for highly accurate and robust computation of compass information from polarized skylight imaging; it shows good accuracy and reliability, especially under cloudy weather, surrounding shielding, and moonlight. The degree of polarization (DOP) combined with the angle of polarization (AOP), calculated from the full-sky polarization image, were used for the compass information calculation. Because of its high sensitivity to the environment, the DOP was used to judge the destruction of polarized information using the PCNN algorithm. Only areas with high AOP accuracy were kept after the DOP PCNN filtering, thereby greatly increasing the compass accuracy and robustness. The experimental results show that the compass accuracy was 0.1805° under clear weather. The method was also proven to be applicable under conditions of shielding by clouds, trees, and buildings, with a compass accuracy better than 1°. With weak polarization information sources, such as moonlight, the method was shown experimentally to have an accuracy of 0.878°.
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS. PMID:27420066
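The block-wise robust estimation of the phase-elevation ratio can be sketched with a Huber M-estimator per block; the block split and simulated contamination below are illustrative, and the paper's band filtering and multi-weighting steps are omitted:

```python
import numpy as np
import statsmodels.api as sm

def blockwise_phase_elevation(phase, elev, n_blocks=4):
    """Split the scene into blocks and robustly fit phase ~ elevation in each,
    yielding a spatially variable phase-elevation ratio (the fitted slope)."""
    ratios = []
    for ph, el in zip(np.array_split(phase, n_blocks),
                      np.array_split(elev, n_blocks)):
        rlm = sm.RLM(ph, sm.add_constant(el), M=sm.robust.norms.HuberT()).fit()
        ratios.append(rlm.params[1])
    return np.array(ratios)

rng = np.random.default_rng(10)
elev = rng.uniform(0, 2000, 4000)
phase = 0.003 * elev + rng.standard_normal(4000)
phase[::50] += 20.0                             # deformation/outlier contamination
print(blockwise_phase_elevation(phase, elev))   # each block slope near 0.003
```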
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-07-12
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS.
Robust H(∞) positional control of 2-DOF robotic arm driven by electro-hydraulic servo system.
Guo, Qing; Yu, Tian; Jiang, Dan
2015-11-01
In this paper, an H∞ positional feedback controller is developed to improve the robust performance of an electro-hydraulic servo system (EHSS) under structural and parametric uncertainty disturbances. The robust control model is described as a linear state-space equation by upper linear fractional transformation. According to the solution of the H∞ sub-optimal control problem, the robust controller is designed and simplified to a lower-order linear model which is easily realized in the EHSS. The simulation and experimental results validate the robustness of the proposed method. The comparison with PI control shows that the robust controller is suitable for this EHSS under the critical condition where the desired system bandwidth is high and the external load of the hydraulic actuator is close to its capacity limit. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Ismail, Syed
2014-01-01
Global atmospheric carbon dioxide (CO2) measurements through the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) Decadal Survey recommended space mission are critical for improving our understanding of CO2 sources and sinks. IM-CW (Intensity Modulated Continuous Wave) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS science requirements. In previous laboratory and flight experiments we have successfully used linear swept frequency modulation to discriminate surface lidar returns from intermediate aerosol and cloud contamination. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate clouds, which is a requirement for the inversion of the CO2 column-mixing ratio from the instrument optical depth measurements, has been demonstrated with the linear swept frequency modulation technique. We are concurrently investigating advanced techniques, implemented in hardware, that improve the auto-correlation properties of the transmitted waveform and make cloud rejection more robust in special restricted scenarios. Several carrier-based modulation techniques are compared, including orthogonal linear swept, orthogonal non-linear swept, and Binary Phase Shift Keying (BPSK). Techniques that reduce or eliminate sidelobes are investigated. These techniques have excellent auto-correlation properties while possessing a finite bandwidth (by way of a new cyclic digital filter), which reduces bias error in the presence of multiple scatterers. Our analyses show that the studied modulation techniques can increase the accuracy of CO2 column measurements from space. A comparison of various properties such as signal-to-noise ratio (SNR) and time-bandwidth product is discussed.
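To illustrate why auto-correlation properties matter, here is a small sketch that generates a linear swept-frequency waveform and measures its peak autocorrelation sidelobe (the sweep parameters are arbitrary, not the instrument's):

```python
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 1.0e6                                   # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)               # 10 ms sweep
s = chirp(t, f0=1.0e3, f1=100.0e3, t1=t[-1], method='linear')

ac = fftconvolve(s, s[::-1], mode='full')    # autocorrelation via FFT
ac /= np.abs(ac).max()
lag0 = len(ac) // 2                          # main-lobe (zero-lag) index
sidelobe = np.max(np.abs(ac[lag0 + 100:]))   # peak sidelobe away from the main lobe
print(f"peak sidelobe level: {20 * np.log10(sidelobe):.1f} dB")
```

A sharp main lobe with low sidelobes is what lets the receiver separate a surface return from intermediate cloud returns; alternative waveforms (orthogonal sweeps, BPSK) trade off exactly this sidelobe structure against bandwidth.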
Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.
Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A
2010-01-10
As the bandwidth and linearity of frequency modulated continuous wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed as to their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.
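The quoted scaling follows the standard FMCW relation between chirp bandwidth and range resolution; as a worked example with an assumed (not the paper's exact) bandwidth of 3 THz:

\Delta R = \frac{c}{2B} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 3\times 10^{12}\ \mathrm{Hz}} = 50\ \mu\mathrm{m},

so a several-THz chirp corresponds to a range resolution of tens of micrometres, consistent with the claimed gains from increasing bandwidth.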
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
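A one-dimensional Python sketch of the in-cell reconstruction idea (the uniform grid, the notation, and the least-squares matching conditions are our assumptions based on the abstract, not the authors' code): given cell means and slopes of the linear (P1) DG solution, a quadratic coefficient is recovered per cell from its face neighbors.

import numpy as np

def rdg_p1p2_coefficient(ubar, slope, h):
    """ubar[i], slope[i]: cell mean and slope of the P1 DG solution on a
    uniform grid of width h. Returns the quadratic coefficient c[i] in
      u_i(x) = ubar[i] + slope[i]*xi + c[i]*(xi**2 - h**2/12),  xi = x - x_i,
    chosen so that, in the least-squares sense, the reconstructed polynomial
    also reproduces the means and slopes of the two face neighbors."""
    n = ubar.size
    c = np.zeros(n)
    for i in range(1, n - 1):
        rows, rhs = [], []
        for j, d in ((i - 1, -h), (i + 1, h)):
            rows += [d * d, 2.0 * d]                    # mean and slope conditions
            rhs += [ubar[j] - ubar[i] - slope[i] * d, slope[j] - slope[i]]
        A = np.array(rows); b = np.array(rhs)
        c[i] = (A @ b) / (A @ A)                        # one-unknown least squares
    return c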
NASA Astrophysics Data System (ADS)
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller-dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, is designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation problem with linear matrix inequality constraints. An iterative online algorithm with an adjustable maximum number of iterations is proposed to coordinate the distributed controllers to achieve global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics
Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris
2016-01-01
As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission: the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold-standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy, and more standardised methods for its collection are required. In areas of low transmission, the parasite rate, sero-conversion rates and molecular metrics, including the multiplicity of infection (MOI) and the molecular force of infection (mFOI), may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314
Pan, Duohai; Crull, George; Yin, Shawn; Grosso, John
2014-02-01
Avalide®, a medication used for the treatment of hypertension, is a combination of Irbesartan and Hydrochlorothiazide. Irbesartan, one of the active pharmaceutical ingredients (API) in Avalide products, exists in two neat crystalline forms: Form A and Form B. Irbesartan Form A is the API form used in a wet granulation for the preparation of Avalide tablets. The presence of the less soluble Irbesartan Form B in Avalide tablets may result in slower dissolution. In this paper, we present our work on the method development, verification and challenges of quantitatively detecting, via NIR and ssNMR, very small amounts of Irbesartan Form B in Avalide tablets. As part of the NIR method development and qualification, limit of detection, linearity and accuracy were examined. In addition, a limited study of the robustness of the method was conducted, and a bias in the measured level of Form B was correlated to the ambient humidity. ssNMR, a primary method for the determination of polymorphic composition, was successfully used as an orthogonal technique to verify the accuracy of the NIR method and added confidence to the NIR method. The speed and efficiency of the NIR method make it a suitable and convenient tool for routine analysis of Avalide tablets for Form B in a QC environment. Copyright © 2013 Elsevier B.V. All rights reserved.
Computational Aeroacoustics by the Space-time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2001-01-01
In recent years, a new numerical methodology for conservation laws, the Space-Time Conservation Element and Solution Element Method (CE/SE), was developed by Dr. Chang of NASA Glenn Research Center and collaborators. By nature, the new method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its rigorous treatment of the fluxes and geometry, it is different from the existing schemes. The CE/SE scheme features: (1) space and time treated on the same footing, with the integral equations of the conservation laws solved with second-order accuracy; (2) high resolution, low dispersion and low dissipation; (3) a novel, truly multi-dimensional, simple but effective non-reflecting boundary condition; (4) effortless implementation, with no numerical fix or parameter choice needed; and (5) robustness sufficient to cover a wide spectrum of compressible flows, from weak linear acoustic waves to strong, discontinuous waves (shocks), appropriate for linear and nonlinear aeroacoustics. Currently, the CE/SE scheme has been developed to such a stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. However, in the present paper, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen as a prototype and is sketched in Section 2. Applications of the CE/SE scheme to linear and nonlinear aeroacoustics and airframe noise are then presented in Sections 3, 4, and 5, respectively, to demonstrate its robustness and capability.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
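For readers unfamiliar with the bias correction at issue, the following Python sketch fits the transformed-linear rating curve log(C) = a + b log(Q) and applies a quasi-maximum-likelihood back-transformation factor exp(s^2/2) (the study's exact correction may differ; the data here are synthetic):

import numpy as np

def fit_rating_curve(Q, C):
    """Fit log(C) = a + b*log(Q); return a predictor for C with
    back-transformation bias correction."""
    x, y = np.log(Q), np.log(C)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2 = resid.var(ddof=2)           # residual variance of the log-space model
    bcf = np.exp(s2 / 2.0)           # lognormal bias correction factor
    return lambda q: bcf * np.exp(a + b * np.log(q))

# Mean load by the flow-duration, rating-curve method: average C(Q)*Q over
# the flow distribution (approximated here by hypothetical daily discharges).
Q_daily = np.array([5.0, 12.0, 30.0, 80.0, 200.0])
C_daily = 0.5 * Q_daily**1.4 * np.exp(
    np.random.default_rng(1).normal(0.0, 0.3, Q_daily.size))
predict_C = fit_rating_curve(Q_daily, C_daily)
mean_load = np.mean(predict_C(Q_daily) * Q_daily)   # units: concentration x flow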
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a greater number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
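A hedged Python sketch of the core iteration, alternating linear discriminant analysis with Gaussian mixture clustering (the sklearn usage, the PCA initialization, and a fixed cluster count are our simplifications; the paper's automatic cluster-count test and outlier handling are omitted):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def discriminative_spike_sort(waveforms, k, n_iter=10, seed=0):
    """waveforms: (n_spikes, n_samples); k: assumed number of clusters."""
    # Initialize labels in a generic (PCA) subspace.
    X = PCA(n_components=min(10, waveforms.shape[1]),
            random_state=seed).fit_transform(waveforms)
    labels = GaussianMixture(k, random_state=seed).fit_predict(X)
    for _ in range(n_iter):
        # Project into the subspace that best separates the current clusters.
        lda = LinearDiscriminantAnalysis(n_components=min(k - 1, 3))
        Z = lda.fit_transform(waveforms, labels)
        new = GaussianMixture(k, random_state=seed).fit_predict(Z)
        if np.array_equal(new, labels):   # stop when assignments stabilize
            break
        labels = new
    return labels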
NASA Astrophysics Data System (ADS)
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low-dimensional and most discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a greater number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
Real-time scene and signature generation for ladar and imaging sensors
NASA Astrophysics Data System (ADS)
Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios
2014-05-01
This paper describes the development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing the accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger-mode LADAR, in addition to the already existing functionality for linear-mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulations for missiles with multi-mode seekers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Stable and low diffusive hybrid upwind splitting methods
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1992-01-01
We introduce in this paper a new concept for upwinding: Hybrid Upwind Splitting (HUS). This original strategy for upwinding is achieved by combining the two previously existing approaches, Flux Vector Splitting (FVS) and Flux Difference Splitting (FDS), while retaining their own interesting features. Indeed, our approach yields upwind methods that share the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the capture of linear waves. We describe here some examples of such HUS methods obtained by hybridizing the Osher approach with FVS schemes. Numerical illustrations are displayed and demonstrate, in particular, the relevance of the proposed HUS methods for viscous calculations.
Dixit, Shuchi; Dubey, Rituraj; Bhushan, Ravi
2014-01-01
Enantioresolution of four anti-ulcer drugs (chiral sulfoxides), namely, omeprazole, rabeprazole, lansoprazole and pantoprazole, was carried out by high-performance liquid chromatography using a polysaccharide-based chiral stationary phase consisting of monochloromethylated cellulose (Lux cellulose-2) under normal and polar-organic-phase conditions with ultraviolet detection at 285 nm. The method was validated for linearity, accuracy, precision, robustness and limit of detection. The optimized enantioresolution method was compared for both the elution modes. The optimized method was further utilized to check the enantiomeric purity of dexrabeprazole. Copyright © 2013 John Wiley & Sons, Ltd.
Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2015-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.
Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features
Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.
2017-01-01
Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone. PMID:29123329
Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns
Teng, Dongdong; Chen, Dihu; Tan, Hongzhou
2015-01-01
The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by the profile (non-frontal face). In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against any changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of eye-center locations which are candidates for the actual position of the eye center. Among these locations, the estimated locations which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation for the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been mentioned in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. According to our implementation on a PC with a Xeon 2.5 GHz CPU, the frame rate of the eye-tracking process can reach 38 Hz. PMID:26426929
High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization
1992-05-01
[Thesis record, garbled in extraction: "High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization", a thesis presented by Louis Joseph Poehlman, Captain, USAF. Only front-matter and figure-list fragments survive, including "Attitude Determination and Control System Architecture" and "Exact Linearization Using Nonlinear Feedback".]
A Reconstructed Discontinuous Galerkin Method for the Euler Equations on Arbitrary Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Luqing Luo; Robert Nourgaliev
2012-11-01
A reconstruction-based discontinuous Galerkin (RDG(P1P2)) method, a variant of the P1P2 method, is presented for the solution of the compressible Euler equations on arbitrary grids. In this method, an in-cell reconstruction, designed to enhance the accuracy of the discontinuous Galerkin method, is used to obtain a quadratic polynomial solution (P2) from the underlying linear polynomial (P1) discontinuous Galerkin solution using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG(P1P2) method is third-order accurate, and outperforms the third-order DG method (DG(P2)) in terms of both computing costs and storage requirements.
Optimization-Based Robust Nonlinear Control
2006-08-01
[Report record, truncated in extraction: new control algorithms were developed for robust stabilization of nonlinear dynamical systems, based on novel linear matrix inequality-based synthesis... The aim was to further advance optimization-based robust nonlinear control design for general nonlinear systems (especially in discrete time) and for linear... The record also lists publication fragments, including Teel, IEEE Transactions on Control Systems Technology, vol. 14, no. 3, pp. 398-407, May 2006, and "A unified framework for input-to-state stability in...".]
Kamalandua, Aubeline
2015-01-01
Age estimation from DNA methylation markers has seen an exponential growth of interest, not least from forensic scientists. The current published assays, however, can still be improved by lowering the number of markers in the assay and by providing more accurate models to predict chronological age. From the published literature we selected 4 age-associated genes (ASPA, PDE4C, ELOVL2, and EDARADD) and determined CpG methylation levels from 206 blood samples of both deceased and living individuals (age range: 0-91 years). These data were subsequently used to compare prediction accuracy with both linear and non-linear regression models. A quadratic regression model in which the methylation levels of ELOVL2 were squared showed the highest accuracy, with a Mean Absolute Deviation (MAD) between chronological age and predicted age of 3.75 years and an adjusted R2 of 0.95. No difference in accuracy was observed for samples obtained from living or deceased individuals or between the 2 genders. In addition, 29 teeth from different individuals (age range: 19-70 years) were analyzed using the same set of markers, resulting in a MAD of 4.86 years and an adjusted R2 of 0.74. Cross validation of the results obtained from blood samples demonstrated the robustness and reproducibility of the assay. In conclusion, the set of 4 CpG DNA methylation markers is capable of producing highly accurate age predictions for blood samples from deceased and living individuals. PMID:26280308
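A minimal Python sketch of the reported quadratic model (the feature layout is assumed from the abstract: four CpG methylation levels, with the ELOVL2 level additionally squared):

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_age_model(meth, age):
    """meth: (n, 4) columns = ASPA, PDE4C, ELOVL2, EDARADD beta values;
    age: (n,) chronological ages."""
    X = np.column_stack([meth, meth[:, 2] ** 2])   # append ELOVL2 squared
    model = LinearRegression().fit(X, age)
    pred = model.predict(X)
    mad = np.mean(np.abs(pred - age))              # Mean Absolute Deviation
    return model, mad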
Ashfaq, Maria; Sial, Ali Akber; Bushra, Rabia; Rehman, Atta-Ur; Baig, Mirza Tasawur; Huma, Ambreen; Ahmed, Maryam
2018-01-01
Spectrophotometry is considered the simplest and most operator-friendly of the available analytical methods for pharmaceutical analysis. The objective of this study was to develop a precise, accurate and rapid UV-spectrophotometric method for the estimation of chlorpheniramine maleate (CPM) in pure form and in solid pharmaceutical formulation. Drug absorption was measured in various solvent systems, including 0.1N HCl (pH 1.2), acetate buffer (pH 4.5), phosphate buffer (pH 6.8) and distilled water (pH 7.0). Method validation was performed as per the official ICH guidelines, 2005. High drug absorption was observed in 0.1N HCl medium with a λmax of 261 nm. The drug showed good linearity from 20 to 60 μg/mL, with the linear regression equation Y = 0.1853X + 0.1098 and a correlation coefficient (R2) of 0.9998. The accuracy of the method was evaluated by percent drug recovery, with more than 99% recovery at the three levels assessed. A %RSD value <1 was computed for inter- and intraday analysis, indicating the high accuracy and precision of the developed technique. The developed method is robust, showing no significant variation with minor changes in method parameters. The LOD and LOQ values were assessed to be 2.2 μg/mL and 6.6 μg/mL, respectively. The investigated method proved its sensitivity, precision and accuracy, and hence could be successfully used to estimate the CPM content in bulk and pharmaceutical matrix tablets.
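As a worked example using the calibration line reported above (the helper name and the absorbance value are ours, for illustration only), the regression can be inverted to estimate concentration and percent recovery:

def cpm_concentration(absorbance):
    # Invert Y = 0.1853*X + 0.1098 from the reported calibration line.
    return (absorbance - 0.1098) / 0.1853      # ug/mL, valid for 20-60 ug/mL

measured = cpm_concentration(7.52)             # hypothetical absorbance reading
recovery = 100.0 * measured / 40.0             # vs. a 40 ug/mL nominal spike (~100%)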
NASA Astrophysics Data System (ADS)
Raghu, M. S.; Basavaiah, K.; Ramesh, P. J.; Abdulrahman, Sameer A. M.; Vinay, K. B.
2012-03-01
A sensitive, precise, and cost-effective UV-spectrophotometric method is described for the determination of pheniramine maleate (PAM) in bulk drug and tablets. The method is based on the measurement of absorbance of a PAM solution in 0.1 N HCl at 264 nm. As per the International Conference on Harmonization (ICH) guidelines, the method was validated for linearity, accuracy, precision, limits of detection (LOD) and quantification (LOQ), and robustness and ruggedness. A linear relationship between absorbance and concentration of PAM in the range of 2-40 μg/ml with a correlation coefficient (r) of 0.9998 was obtained. The LOD and LOQ values were found to be 0.18 and 0.39 μg/ml PAM, respectively. The precision of the method was satisfactory: the value of relative standard deviation (RSD) did not exceed 3.47%. The proposed method was applied successfully to the determination of PAM in tablets with good accuracy and precision. Percentages of the label claims ranged from 101.8 to 102.01% with the standard deviation (SD) from 0.64 to 0.72%. The accuracy of the method was further ascertained by recovery studies via a standard addition procedure. In addition, the forced degradation of PAM was conducted in accordance with the ICH guidelines. Acidic and basic hydrolysis, thermal stress, peroxide, and photolytic degradation were used to assess the stability-indicating power of the method. A substantial degradation was observed during oxidative and alkaline degradations. No degradation was observed under other stress conditions.
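LOD/LOQ figures of this kind typically follow the ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of blank (or low-level) responses and S is the calibration slope; a small Python sketch (the paper may have used a variant, and the numbers below are placeholders):

import numpy as np

def ich_lod_loq(blank_responses, slope):
    sigma = np.std(blank_responses, ddof=1)    # SD of blank responses
    return 3.3 * sigma / slope, 10.0 * sigma / slope

lod, loq = ich_lod_loq([0.011, 0.013, 0.010, 0.012, 0.011], slope=0.052)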
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models
de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo
2018-01-01
Abstract Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and the accuracy was assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances. PMID:29599857
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fit, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.
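A hedged Python sketch of a Kalman filter with adaptive measurement-noise covariance of the kind described above (the state layout and the fit-quality heuristic are our assumptions, not the authors' exact design):

import numpy as np

class AdaptiveLaneKF:
    def __init__(self, dt=0.04):
        self.x = np.zeros(4)          # [offset, heading, d_offset, d_heading]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)         # we observe lane offset and heading
        self.Q = 1e-3 * np.eye(4)

    def step(self, z, fit_residual):
        # Adaptive noise covariance: poor line fits get a large R (low trust).
        R = (0.01 + 10.0 * fit_residual**2) * np.eye(2)
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x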
Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.
Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A
2016-03-21
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
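For t-distributed errors, such an ECME scheme reduces to iteratively reweighted least squares with weights (ν+1)/(ν + r²/σ²); a minimal Python sketch (the AR decorrelation is omitted, and the degree of freedom ν is held fixed rather than estimated, unlike in the paper):

import numpy as np

def irls_t(A, y, nu=4.0, n_iter=50):
    """Robust ML-style fit of y = A @ beta + e, e ~ scaled Student-t."""
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - A @ beta
        sigma2 = np.mean(r**2)                   # crude scale update
        w = (nu + 1.0) / (nu + r**2 / sigma2)    # E-step weights of the t model
        Aw = A * w[:, None]
        beta = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # weighted normal equations
    return beta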
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems, and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014), and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains, the performance of the proposed approach is analyzed.
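The complex-step derivative approximation underlying the scheme is easy to demonstrate in isolation: f'(x) ≈ Im f(x + ih)/h involves no subtractive cancellation, so h can be made extremely small. A self-contained Python demo (not the finite-element code itself):

import numpy as np

def csd(f, x, h=1e-30):
    # Complex-step derivative: no subtraction, so no round-off cancellation.
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))
print(abs(csd(f, 1.0) - exact))    # agrees with the exact derivative to machine precision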
Shurbaji, Maher; Abu Al Rub, Mohamad H; Saket, Munib M; Qaisi, Ali M; Salim, Maher L; Abu-Nameh, Eyad S M
2010-01-01
A rapid, simple, and sensitive RP-HPLC analytical method was developed for the simultaneous determination of triclabendazole and ivermectin in combination using a C18 RP column. The mobile phase was acetonitrile-methanol-water-acetic acid (56 + 36 + 7.5 + 0.5, v/v/v/v) at a pH of 4.35 and flow rate of 1.0 mL/min. A 245 nm UV detection wavelength was used. Complete validation, including linearity, accuracy, recovery, LOD, LOQ, precision, robustness, stability, and peak purity, was performed. The calibration curve was linear over the range 50.09-150.26 microg/mL for triclabendazole with r = 0.9999 and 27.01-81.02 microg/mL for ivermectin with r = 0.9999. Calculated LOD and LOQ for triclabendazole were 0.03 and 0.08 microg/mL, respectively, and for ivermectin 0.07 and 0.20 microg/mL, respectively. The intraday precision obtained was 98.71% with RSD of 0.87% for triclabendazole and 100.79% with RSD 0.73% for ivermectin. The interday precision obtained was 99.51% with RSD of 0.35% for triclabendazole and 100.55% with RSD of 0.59% for ivermectin. Robustness was also studied, and there was no significant variation of the system suitability of the analytical method with small changes in experimental parameters.
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
Lyu, Weiwei; Cheng, Xianghong
2017-11-28
Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation can adjust the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method.
Karasakal, A; Ulu, S T
2014-05-01
A novel, sensitive and selective spectrofluorimetric method was developed for the determination of tamsulosin in spiked human urine and pharmaceutical preparations. The proposed method is based on the reaction of tamsulosin with 1-dimethylaminonaphthalene-5-sulfonyl chloride in carbonate buffer pH 10.5 to yield a highly fluorescent derivative. The described method was validated and the analytical parameters of linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, recovery and robustness were evaluated. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over the range 1.22 × 10(-7) to 7.35 × 10(-6) M. LOD and LOQ were calculated as 1.07 × 10(-7) and 3.23 × 10(-7) M, respectively. The proposed method was successfully applied for the determination of tamsulosin in pharmaceutical preparations and the obtained results were in good agreement with those obtained using the reference method. Copyright © 2013 John Wiley & Sons, Ltd.
Alam, Prawez; Foudah, Ahmed I.; Zaatout, Hala H.; T, Kamal Y; Abdel-Kader, Maged S.
2017-01-01
Background: A simple and sensitive thin-layer chromatographic method has been established for the quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations, using a validated reverse-phase HPTLC procedure. Materials and Methods: The RP-HPTLC method was carried out on glass plates coated with RP-18 silica gel 60 F254S, using methanol-water (7:3 v/v) as mobile phase. Results: The developed plate was scanned and quantified densitometrically at 256 nm. Glycyrrhizin peaks from Glycyrrhiza glabra rhizome and baby herbal formulations were identified by comparing their single spot at Rf = 0.63 ± 0.01. Linear regression analysis revealed a good linear relationship between peak area and amount of glycyrrhizin in the range of 2000-7000 ng/band. Conclusion: The method was validated, in accordance with ICH guidelines, for precision, accuracy, and robustness. The proposed method will be useful to determine the therapeutic dose of glycyrrhizin in herbal formulations as well as in the bulk drug. PMID:28573236
Liu, Cui-Ting; Zhang, Min; Yan, Ping; Liu, Hai-Chan; Liu, Xing-Yun; Zhan, Ruo-Ting
2016-01-01
Zhengtian pills (ZTPs) are a traditional Chinese medicine (TCM) which has been commonly used to treat headaches. Volatile components of ZTPs extracted by ethyl acetate with an ultrasonic method were analyzed by gas chromatography-mass spectrometry (GC-MS). Twenty-two components were identified, accounting for 78.884% of the total components of the volatile oil. The three main volatile components, protocatechuic acid, ferulic acid, and ligustilide, were simultaneously determined using ultra-high performance liquid chromatography coupled with diode array detection (UHPLC-DAD). Baseline separation was achieved on an XB-C18 column with linear gradient elution of methanol-0.2% acetic acid aqueous solution. The UHPLC-DAD method provided good linearity (R2 ≥ 0.9992), precision (RSD < 3%), accuracy (100.68-102.69%), and robustness. The UHPLC-DAD/GC-MS method was successfully utilized to analyze the volatile components, protocatechuic acid, ferulic acid, and ligustilide, in 13 batches of ZTPs, and is suitable for discrimination and quality assessment of ZTPs.
Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád
2018-06-01
A reverse-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial, screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate was eluted at 4.4 minutes and showed no interference with the excipients used in the formulation, at 321 nm. The method is linear in the range of 20-320 μg mL-1 (R2 = 0.99998). Recovery, tested in the range of 40-120 μg mL-1, was found to be 96.1-102.1 %. Intra-day and intermediate precision RSDs ranged from 1.0-1.4 and 1.2-1.4 %, resp. The limit of detection and limit of quantitation were 0.06 and 0.20 μg mL-1, resp. The proposed technique is fast, cost-effective, reliable and reproducible, and is proposed for the routine analysis of strontium ranelate.
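For readers unfamiliar with the screening design mentioned, a two-level full factorial over k factors enumerates all 2^k level combinations; a minimal Python sketch (factor names and levels are placeholders, not the study's settings):

from itertools import product

# Hypothetical robustness factors with low/high levels.
factors = {"flow_mL_min": (0.9, 1.1), "pH": (4.2, 4.6), "temp_C": (25, 35)}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
# 2**3 = 8 runs; fitting main effects to the assay response across these runs
# indicates whether small deliberate changes affect the method (robustness).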
EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.
2013-01-01
A specific stability-indicating reverse-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and a gradient mobile phase were applied for good resolution of all peaks. Detection was achieved at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20-200, 20-1000 and 25-1000 μg mL−1 for Cinn, Pira, and Hept, respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory-prepared mixtures and in pharmaceutical formulations. PMID:24137049
Robust control of a parallel hybrid drivetrain with a CVT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, T.; Schroeder, D.
1996-09-01
In this paper the design of a robust control system for a parallel hybrid drivetrain is presented. The drivetrain is based on a continuously variable transmission (CVT) and is therefore a highly nonlinear multiple-input-multiple-output system (MIMO-System). Input-Output-Linearization offers the possibility of linearizing and of decoupling the system. Since for example the vehicle mass varies with the load and the efficiency of the gearbox depends strongly on the actual working point, an exact linearization of the plant will mostly fail. Therefore a robust control algorithm based on sliding mode is used to control the drivetrain.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For the dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
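A hedged Python sketch of the multivariate measure described above (the 3x3x3 kernel size is from the abstract; the thresholds, the combination rule, and the omission of the uniformity and background-phase-correction terms are our simplifications):

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(vol, size=3):
    """Standard deviation in a size**3 neighborhood via moving averages."""
    m = uniform_filter(vol, size)
    m2 = uniform_filter(vol**2, size)
    return np.sqrt(np.maximum(m2 - m**2, 0.0))

def tissue_mask(magnitude, phase):
    # First-order phase difference along one axis (padded to keep the shape).
    dphase = np.diff(phase, axis=0, append=phase[-1:])
    s_mag = local_std(magnitude)
    s_phi = local_std(dphase)
    # Heuristic: tissue has smooth phase (low local phase-difference std) and
    # magnitude well above the noise floor; air has random phase.
    return (s_phi < 0.5) & (magnitude > 2.0 * s_mag.mean())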
Sliding-mode control combined with improved adaptive feedforward for wafer scanner
NASA Astrophysics Data System (ADS)
Li, Xiaojie; Wang, Yiguang
2018-03-01
In this paper, a sliding-mode control method combined with improved adaptive feedforward is proposed for a wafer scanner to improve the tracking performance of the closed-loop system. In particular, in addition to the inverse model, the nonlinear force ripple effect, which may degrade the tracking accuracy of the permanent magnet linear motor (PMLM), is considered in the proposed method. The dominant position periodicity of the force ripple is determined using Fast Fourier Transform (FFT) analysis of experimental data, and the improved feedforward control is achieved by online recursive least-squares (RLS) estimation of the inverse model and the force ripple. The improved adaptive feedforward is given in a general form of an nth-order model with force ripple effect. This method is motivated by the motion controller design of the long-stroke PMLM and the short-stroke voice coil motor for wafer scanners. The stability of the closed-loop control system and the convergence of the motion tracking are guaranteed theoretically by the proposed sliding-mode feedback and adaptive feedforward methods. Comparative experiments on a precision linear motion platform verify the correctness and effectiveness of the proposed method. The experimental results show that, compared with the traditional method, the proposed one achieves faster response and stronger robustness, especially for high-speed motion trajectories, with improvements in both tracking accuracy and settling time.
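A minimal Python sketch of online recursive least-squares estimation of an inverse model augmented with a position-periodic ripple term (the regressor layout and the single dominant spatial period, suggested by the FFT step above, are assumptions, not the authors' exact formulation):

import numpy as np

class RLSFeedforward:
    def __init__(self, period, lam=0.999):
        self.theta = np.zeros(4)        # [mass, damping, ripple_sin, ripple_cos]
        self.P = 1e3 * np.eye(4)
        self.w = 2 * np.pi / period     # dominant ripple frequency in position
        self.lam = lam                  # forgetting factor

    def update(self, pos, vel, acc, force):
        # Regressor: inverse-model terms plus a position-periodic ripple pair.
        phi = np.array([acc, vel, np.sin(self.w * pos), np.cos(self.w * pos)])
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += k * (force - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return phi @ self.theta         # feedforward force estimate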
Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee
2013-01-01
Moringa oleifera Lamarck (Moringaceae) is used as a multipurpose medicinal plant for the treatment of various diseases. Isoquercetin, astragalin, and crypto-chlorogenic acid have previously been found to be major active components in the leaves of this plant. In this study, a thin-layer chromatography (TLC) densitometric method was developed and validated for simultaneous quantification of these major components in the 70% ethanolic extracts of M. oleifera leaves collected from 12 locations. The average amounts of crypto-chlorogenic acid, isoquercetin, and astragalin were found to be 0.0473, 0.0427, and 0.0534% dry weight, respectively. The method was validated for linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness. Linearity was obtained in the range of 100-500 ng/spot with a correlation coefficient (r) over 0.9961. Intraday and interday precisions demonstrated relative standard deviations of less than 5%. The accuracy of the method was confirmed by determining the recovery. The average recoveries of each component from the extracts were in the range of 98.28 to 99.65%. Additionally, the leaves from Chiang Mai province contained the highest amounts of all active components. The proposed TLC densitometric method was simple, accurate, precise, and cost-effective for routine quality control of M. oleifera leaf extracts. PMID:23533530
Beck, William; Kabiche, Sofiane; Balde, Issa-Bella; Carret, Sandra; Fontan, Jean-Eudes; Cisternino, Salvatore; Schlatter, Joël
2016-12-01
To assess the stability of pharmaceutical suxamethonium (succinylcholine) solution for injection by validated stability-indicating chromatographic method in vials stored at room temperature. The chromatographic assay was achieved by using a detector wavelength set at 218 nm, a C18 column, and an isocratic mobile phase (100% of water) at a flow rate of 0.6 mL/min for 5 minutes. The method was validated according to the International Conference on Harmonization guidelines with respect to the stability-indicating capacity of the method including linearity, limits of detection and quantitation, precision, accuracy, system suitability, robustness, and forced degradations. Linearity was achieved in the concentration range of 5 to 40 mg/mL with a correlation coefficient higher than 0.999. The limits of detection and quantification were 0.8 and 0.9 mg/mL, respectively. The percentage relative standard deviation for intraday (1.3-1.7) and interday (0.1-2.0) precision was found to be less than 2.1%. Accuracy was assessed by the recovery test of suxamethonium from solution for injection (99.5%-101.2%). Storage of suxamethonium solution for injection vials at ambient temperature (22°C-26°C) for 17 days demonstrated that at least 95% of original suxamethonium concentration remained stable. Copyright © 2016 Elsevier Inc. All rights reserved.
Time-delay control of a magnetic levitated linear positioning system
NASA Technical Reports Server (NTRS)
Tarn, J. H.; Juang, K. Y.; Lin, C. E.
1994-01-01
In this paper, a high accuracy linear positioning system with a linear force actuator and magnetic levitation is proposed. By locating a permanently magnetized rod inside a current-carrying solenoid, an axial force arising from the boundary effect of the magnet poles is utilized to power the linear motion, while the levitation force is governed by Ampere's law and supplied by the same solenoid. With levitation in the radial direction, there is hardly any friction between the rod and the solenoid, so high-speed motion can be achieved. Moreover, the axial force acting on the rod is a smooth function of rod position, so the system can provide nanometer-resolution linear positioning, down to molecular dimensions. Since the force-position relation is highly nonlinear, and the mathematical model is derived under assumptions such as an equivalent-solenoid representation of the permanently magnetized rod, unmodeled dynamics exist in practical applications. Thus 'robustness' is an important issue in controller design. Meanwhile, load effects act directly on the servo system without transmission elements, so the capability of 'disturbance rejection' is also required. With the above considerations, a time-delay control scheme is chosen and applied. By comparing the input-output relation with the mathematical model, the time-delay controller calculates an estimate of the unmodeled dynamics and disturbances and then injects the desired compensation into the system. The effectiveness of the linear positioning system and control scheme is illustrated with simulation results.
Azougagh, M; Elkarbane, M; Bakhous, K; Issmaili, S; Skalli, A; Iben Moussad, S; Benaji, B
2016-09-01
An innovative, simple, fast, precise and accurate ultra-high performance liquid chromatography (UPLC) method was developed for the determination of diclofenac (Dic) along with its impurities, including the new dimer impurity, in various pharmaceutical dosage forms. An Acquity HSS T3 (C18, 100×2.1 mm, 1.8 μm) column in gradient mode was used with a mobile phase comprising phosphoric acid solution (pH 2.3) and methanol. The flow rate and the injection volume were set at 0.35 mL·min(-1) and 1 μL, respectively, and UV detection was carried out at 254 nm using a photodiode array detector. Dic was subjected to stress conditions of acidic, basic, hydrolytic, thermal, oxidative and photolytic degradation. The newly developed method was successfully validated in accordance with the International Conference on Harmonization (ICH) guidelines with respect to specificity, limit of detection, limit of quantitation, precision, linearity, accuracy and robustness. The degradation products were well resolved from the main peak and its seven impurities, proving the specificity of the method. The method showed good linearity with consistent recoveries for Dic content and its impurities. The relative standard deviation obtained for the repeatability and intermediate precision experiments was less than 3%, and the LOQ was less than 0.5 μg·mL(-1) for all compounds. The proposed method was found to be accurate, precise, specific, linear and robust. In addition, the method was successfully applied to the assay determination of Dic and its impurities in several pharmaceutical dosage forms. Copyright © 2016 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
Kulikov, A U; Zinchenko, A A
2007-02-19
This paper describes the validation of an isocratic HPLC method for the assay of dexpanthenol in aerosol and gel. The method employs a Vydac Proteins C4 column with a mobile phase of an aqueous solution of trifluoroacetic acid and UV detection at 206 nm. A linear response (r>0.9999) was observed in the range of 13.0-130 μg mL(-1). The method shows good recoveries, and intra- and inter-day relative standard deviations were less than 1.0%. Validation parameters such as specificity, accuracy and robustness were also determined. The method can be used for the assay of dexpanthenol in panthenol aerosol and gel, as it separates dexpanthenol from aerosol and gel excipients.
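Linearity figures such as the r > 0.9999 quoted above come from an ordinary least-squares calibration line; a minimal sketch, with made-up concentrations and peak areas, might look like this:

```python
import numpy as np

# Hypothetical calibration data: concentrations (ug/mL) and peak areas
conc = np.array([13.0, 26.0, 52.0, 78.0, 104.0, 130.0])
area = np.array([1.05, 2.11, 4.20, 6.33, 8.40, 10.52])

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
r = np.corrcoef(conc, area)[0, 1]              # linearity (correlation coefficient)
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r={r:.5f}")

# Predict an unknown sample's concentration from its measured peak area
unknown_area = 5.0
print("estimated conc:", (unknown_area - intercept) / slope)
```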
Nonlinear Acoustic and Ultrasonic NDT of Aeronautical Components
NASA Astrophysics Data System (ADS)
Van Den Abeele, Koen; Katkowski, Tomasz; Mattei, Christophe
2006-05-01
In response to the demand for innovative microdamage inspection systems with high sensitivity and undoubted accuracy, we are currently investigating the use and robustness of several acoustic and ultrasonic NDT techniques based on Nonlinear Elastic Wave Spectroscopy (NEWS) for the characterization of microdamage in aeronautical components. In this report, we illustrate the results of an amplitude dependent analysis of the resonance behaviour, both in the time (signal reverberation) and the frequency (sweep) domain. The technique is applied to intact and damaged samples of Carbon Fiber Reinforced Plastic (CFRP) composites after thermal loading or mechanical fatigue. The method shows a considerable gain in sensitivity and an incontestable interpretation of the results for nonlinear signatures in comparison with the linear characteristics. For highly fatigued samples, slow dynamical effects are observed.
Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan
2004-11-19
A static headspace (HS) gas chromatographic method for the quantitative determination of residual solvents in a drug substance has been developed according to the European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as the sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for ethanol, tetrahydrofuran and toluene determination. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and the system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision and robustness were determined, and excellent results were obtained.
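The standard-addition quantitation mentioned above extrapolates the response (normalised by the internal standard) against the spiked amount; a minimal sketch with invented numbers:

```python
import numpy as np

# Hypothetical standard additions (ug) and analyte/internal-standard area ratios
added = np.array([0.0, 5.0, 10.0, 15.0])
ratio = np.array([0.42, 0.63, 0.84, 1.05])

slope, intercept = np.polyfit(added, ratio, 1)
# The x-intercept magnitude estimates the amount originally present in the sample
original_amount = intercept / slope
print(f"estimated residual solvent in sample: {original_amount:.2f} ug")
```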
Multiple object tracking using the shortest path faster association algorithm.
Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. We then relax the integer program to a standard linear programming problem, so the global optimum can be obtained quickly using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
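For reference, the shortest path faster algorithm (SPFA) itself is a queue-based Bellman-Ford variant; below is a minimal sketch on a generic weighted digraph (the tracking-specific flow network construction is not reproduced here):

```python
from collections import deque

def spfa(graph, source):
    """Shortest Path Faster Algorithm: single-source shortest paths.

    graph: dict mapping node -> list of (neighbor, edge_weight); weights may
    be negative as long as the graph contains no negative cycle.
    """
    dist = {v: float("inf") for v in graph}
    dist[source] = 0.0
    queue = deque([source])
    in_queue = {v: False for v in graph}
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:          # relax edge (u, v)
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return dist

g = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", -2.0)], "b": []}
print(spfa(g, "s"))   # {'s': 0.0, 'a': 1.0, 'b': -1.0}
```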
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although linear-scale feature matching algorithms such as SIFT and SURF are strongly robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
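The adjusted-cosine comparison referred to above subtracts a mean from each descriptor before taking the cosine of the angle between them; a minimal sketch under that reading (the mean vector and 64-D descriptors are illustrative assumptions):

```python
import numpy as np

def adjusted_cosine(d1, d2, mean):
    """Adjusted-cosine similarity: centre both descriptors on a common mean
    vector before computing the cosine of the angle between them."""
    a, b = d1 - mean, d2 - mean
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage: the mean is taken over a whole descriptor set
descs = np.random.rand(100, 64)          # hypothetical 64-D feature descriptors
mu = descs.mean(axis=0)
print(adjusted_cosine(descs[0], descs[1], mu))
```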
Multiple Object Tracking Using the Shortest Path Faster Association Algorithm
Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. We then relax the integer program to a standard linear programming problem, so the global optimum can be obtained quickly using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322
Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.
Zana, F; Klein, J C
2001-01-01
This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow, and the tree-like geometry of vessels makes them a usable feature for registration between images that can be of a different nature. In order to define vessel-like patterns, segmentation is performed with respect to a precise model. We define a vessel as a bright pattern that is piece-wise connected and locally linear; mathematical morphology is very well adapted to this description, but other patterns fit such a morphological description as well. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed: vessels are separated out because they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) linear pattern with Gaussian-like profile improvement; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and its accuracy with respect to noise.
Linear, multivariable robust control with a mu perspective
NASA Technical Reports Server (NTRS)
Packard, Andy; Doyle, John; Balas, Gary
1993-01-01
The structured singular value is a linear algebra tool developed to study a particular class of matrix perturbation problems arising in robust feedback control of multivariable systems. These perturbations are called linear fractional, and are a natural way to model many types of uncertainty in linear systems, including state-space parameter uncertainty, multiplicative and additive unmodeled dynamics uncertainty, and coprime factor and gap metric uncertainty. The structured singular value theory provides a natural extension of classical SISO robustness measures and concepts to MIMO systems. Structured singular value analysis, coupled with approximate synthesis methods, makes it possible to study the tradeoff between performance and uncertainty that occurs in all feedback systems. In MIMO systems, the complexity of the spatial interactions in the loop gains makes it difficult to heuristically quantify the tradeoffs that must occur. This paper examines the role played by the structured singular value (and its computable bounds) in answering these questions, as well as its role in the general robust, multivariable control analysis and design problem.
Esteki, M; Nouroozi, S; Shahsavari, Z
2016-02-01
To develop a simple and efficient spectrophotometric technique combined with chemometrics for the simultaneous determination of methyl paraben (MP) and hydroquinone (HQ) in cosmetic products, and specifically, to: (i) evaluate the potential use of the successive projections algorithm (SPA) on derivative spectrophotometric data in order to provide sufficient accuracy and model robustness and (ii) determine MP and HQ concentrations in cosmetics without tedious pre-treatments such as derivatization or extraction techniques, which are time-consuming and require hazardous solvents. The absorption spectra were measured in the wavelength range of 200-350 nm. Prior to building the chemometric models, the original and first-derivative absorption spectra of binary mixtures were used as calibration matrices. Variables selected by the successive projections algorithm were used to obtain multiple linear regression (MLR) models based on a small subset of wavelengths. The number of wavelengths and the starting vector were optimized, and comparison of the root mean square errors of calibration (RMSEC) and cross-validation (RMSECV) was applied to select effective wavelengths with the least collinearity and redundancy. Principal component regression (PCR) and partial least squares (PLS) models were also developed for comparison. The concentrations of the calibration matrix ranged from 0.1 to 20 μg mL(-1) for MP, and from 0.1 to 25 μg mL(-1) for HQ. The constructed models were tested on an external validation data set and finally on cosmetic samples. The results indicated that successive projections algorithm-multiple linear regression (SPA-MLR), applied to the first-derivative spectra, achieved the optimal performance for the two compounds when compared with the full-spectrum PCR and PLS. The root mean square error of prediction (RMSEP) was 0.083 and 0.314 for MP and HQ, respectively. To verify the accuracy of the proposed method, a recovery study on real cosmetic samples was carried out with satisfactory results (84-112%). The proposed method, which is an environmentally friendly approach using a minimum amount of solvent, is a simple, fast and low-cost analysis method that can provide high accuracy and robust models. The suggested method does not need any complex extraction procedure, which is time-consuming and requires hazardous solvents. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
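Once SPA has picked a small wavelength subset, the MLR step reduces to ordinary least squares on those columns; a minimal sketch with hypothetical data (the SPA selection itself is not reproduced, the chosen indices below are assumed):

```python
import numpy as np

# Hypothetical calibration spectra (n samples x p wavelengths) and concentrations
rng = np.random.default_rng(0)
X = rng.random((30, 151))                 # absorbances on a 200-350 nm grid
y = rng.random(30)                        # analyte concentration (ug/mL)

selected = [12, 47, 88, 130]              # wavelength indices chosen by SPA (assumed)
A = np.column_stack([np.ones(len(y)), X[:, selected]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # MLR fit on the selected subset

y_hat = A @ coef
rmsec = np.sqrt(np.mean((y - y_hat) ** 2))     # root mean square error of calibration
print(f"RMSEC = {rmsec:.4f}")
```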
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear programming implementable robust solution concepts related to risk-averse optimisation criteria.
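The tail mean, i.e. the average outcome over the worst fraction of scenarios, is straightforward to compute directly; a minimal sketch, assuming a minimisation setting with equiprobable scenarios where larger cost is worse:

```python
import numpy as np

def tail_mean(costs, beta):
    """Mean of the worst beta-fraction of scenario costs (0 < beta <= 1).

    With beta = 1 this is the ordinary mean; as beta shrinks it approaches
    the worst-case cost, interpolating between the two robust extremes.
    """
    costs = np.sort(np.asarray(costs))[::-1]      # worst (largest) first
    k = max(1, int(np.ceil(beta * len(costs))))
    return costs[:k].mean()

scenario_costs = [3.0, 7.5, 2.1, 9.8, 4.4]
print(tail_mean(scenario_costs, 0.4))   # average of the two worst scenarios
```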
Optimized swimmer tracking system based on a novel multi-related-targets approach
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2017-02-01
Robust tracking is a crucial step in automatic swimmer evaluation from video sequences. We designed a robust swimmer tracking system using a new multi-related-targets approach. The main idea is to consider the swimmer as a bloc of connected subtargets that advance at the same speed. If one of the subtargets is partially or totally occluded, it can be localized by knowing the position of the others. In this paper, we first introduce the two-dimensional direct linear transformation technique that we used to calibrate the videos. Then, we present the classical tracking approach based on dynamic fusion. Next, we highlight the main contribution of our work, which is the multi-related-targets tracking approach. This approach, the classical head-only approach and the ground truth are then compared through testing on a database of high-level swimmers in training, national and international competitions (French National Championships, Limoges 2015, and World Championships, Kazan 2015). The tracking percentage and the accuracy of the instantaneous speed are evaluated, and the findings show that our new approach is significantly more accurate than the classical approach.
Abd El-Hay, Soad S; Hashem, Hisham; Gouda, Ayman A
2016-03-01
A novel, simple and robust high-performance liquid chromatography (HPLC) method was developed and validated for the simultaneous determination of xipamide (XIP), triamterene (TRI) and hydrochlorothiazide (HCT) in their bulk powders and dosage forms. Chromatographic separation was carried out in less than two minutes. The separation was performed on a RP C-18 stationary phase with an isocratic elution system consisting of 0.03 mol L(-1) orthophosphoric acid (pH 2.3) and acetonitrile (ACN) as the mobile phase in a 50:50 ratio, at a flow rate of 2.0 mL min(-1) at room temperature. Detection was performed at 220 nm. Validation was performed concerning system suitability, limits of detection and quantitation, accuracy, precision, linearity and robustness. Calibration curves were rectilinear over the range of 0.195-100 μg mL(-1) for all the drugs studied. Recovery values were 99.9%, 99.6% and 99.0% for XIP, TRI and HCT, respectively. The method was applied to the simultaneous determination of the studied analytes in their pharmaceutical dosage forms.
Robust head pose estimation via supervised manifold learning.
Wang, Chao; Song, Xubo
2014-05-01
Head poses can be automatically estimated using manifold learning algorithms, under the assumption that, with pose as the only variable, face images should lie on a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction as well as the graph weight by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep data points with similar pose angles close together but data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Lyu, Weiwei
2017-01-01
Transfer alignment is a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation can adjust the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method. PMID:29182592
Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul
2014-09-01
This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
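The weighted least-squares fit at the heart of this kind of approach solves a single linear regression for the potential coefficients; a minimal sketch, with hypothetical bispectrum features, QM targets, and weights (none of these numbers come from the report):

```python
import numpy as np

# Hypothetical training data: rows = small atomic configurations from QM,
# columns = bispectrum components; b = QM reference values (energies/forces)
rng = np.random.default_rng(1)
A = rng.random((500, 56))          # 56 bispectrum components (illustrative)
b = rng.random(500)                # QM reference values
w = np.ones(500)
w[:100] = 10.0                     # e.g. weight energy rows more heavily

# Weighted least squares: minimise || W^(1/2) (A beta - b) ||^2
ws = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * ws[:, None], b * ws, rcond=None)
print("first SNAP-style coefficients:", beta[:5])
```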
Detection of epileptic seizure in EEG signals using linear least squares preprocessing.
Roshan Zamir, Z
2016-09-01
An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be averted by detecting the electrical changes in the brain that happen before the seizure takes place. Automatic detection of seizures is necessary since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Much of the prior research on seizure detection has been based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved classification accuracy is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation of performance consistency. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while their false positive and negative rates are 0 and the corresponding precision values are 1. Numerical results suggest that these models are robust and efficient for detecting epileptic seizures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
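Approximating a signal by a sinusoid can itself be posed as a linear least-squares problem once the frequency is fixed, which may be the flavour of preprocessing meant above; a minimal sketch under that assumption (model form, frequency, and sampling rate are all illustrative):

```python
import numpy as np

def sinusoid_features(signal, freq, fs):
    """Fit signal ~ a*sin + b*cos + c at a fixed frequency by linear least
    squares; return (a, b, c) as compact features of the epoch."""
    t = np.arange(len(signal)) / fs
    A = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return coef

fs = 173.61                                # sampling rate in Hz (illustrative)
eeg = np.random.randn(4097)                # placeholder EEG epoch
print(sinusoid_features(eeg, freq=10.0, fs=fs))
```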
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of the TM protein increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.
Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent
2017-01-01
In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H∞ control law is used in the non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy-based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Liu, Ji-Zhen
2017-02-01
It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the necessary photons are reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Hai-Rui, E-mail: hrwei@ustb.edu.cn; Liu, Ji-Zhen
2017-02-15
It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch–Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the necessary photons are reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions so as to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and that a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least-squares regression, sparse subspace clustering, and locally linear representation.
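The l2-norm case admits a closed-form ridge-style solution, which is presumably what makes an L2-graph cheap to build; a minimal sketch, assuming each point is coded over all the others with a small Tikhonov term (the regularisation value and symmetrisation are illustrative choices):

```python
import numpy as np

def l2_graph(X, lam=0.1):
    """Build an L2-graph style similarity matrix: each column of X (a data
    point) is represented by l2-regularised least squares over the others."""
    d, n = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[:, idx]                               # dictionary of other points
        c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        W[idx, i] = np.abs(c)                       # large weight = same subspace
    return np.maximum(W, W.T)                       # symmetrise for clustering

X = np.random.rand(20, 60)                          # 20-D features, 60 points
A = l2_graph(X)
print(A.shape)
```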
Nonlinear Aeroacoustics Computations by the Space-Time CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.
2003-01-01
The Space-Time Conservation Element and Solution Element Method, or CE/SE Method for short, is a recently developed numerical method for conservation laws. Despite its second-order accuracy in space and time, it possesses low dispersion errors and low dissipation. The method is robust enough to cover a wide range of compressible flows: from weak linear acoustic waves to strong discontinuous waves (shocks). An outstanding feature of the CE/SE scheme is its truly multi-dimensional, simple but effective non-reflecting boundary condition (NRBC), which is particularly valuable for computational aeroacoustics (CAA). By nature, the method may be categorized as a finite volume method, where the conservation element (CE) is equivalent to a finite control volume (or cell) and the solution element (SE) can be understood as the cell interface. However, due to its careful treatment of the surface fluxes and geometry, it differs from existing schemes. The CE/SE scheme has now matured to the stage that a 3-D unstructured CE/SE Navier-Stokes solver is already available. In the present review paper, however, as a general introduction to the CE/SE method, only the 2-D unstructured Euler CE/SE solver is chosen and sketched in section 2. Applications of the 2-D and 3-D CE/SE schemes to linear, and in particular, nonlinear aeroacoustics are then depicted in sections 3, 4, and 5 to demonstrate its robustness and capability.
Robust Decision-making Applied to Model Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M.
2012-08-06
The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.
Liu, Hua; Wu, Wen
2017-03-31
The conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing the strong tracking filter (STF) into the SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted online so that the robustness of the filter and its capability of dealing with uncertainty factors are improved. In this way, the proposed algorithm has the advantages of both the STF's strong robustness and the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking.
Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking
Liu, Hua; Wu, Wen
2017-01-01
The conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing the strong tracking filter (STF) into the SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted online so that the robustness of the filter and its capability of dealing with uncertainty factors are improved. In this way, the proposed algorithm has the advantages of both the STF's strong robustness and the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking. PMID:28362347
A novel finite volume discretization method for advection-diffusion systems on stretched meshes
NASA Astrophysics Data System (ADS)
Merrick, D. G.; Malan, A. G.; van Rooyen, J. A.
2018-06-01
This work is concerned with spatial advection and diffusion discretization technology within the field of Computational Fluid Dynamics (CFD). In this context, a novel method is proposed, dubbed the Enhanced Taylor Advection-Diffusion (ETAD) scheme. The model equation employed for the design of the scheme is the scalar advection-diffusion equation, the industrial application being incompressible laminar and turbulent flow. Developed to be implementable in finite volume codes, ETAD places specific emphasis on improving accuracy on stretched structured and unstructured meshes while considering both advection and diffusion aspects in a holistic manner. A vertex-centered structured and unstructured finite volume scheme is used, and only data available on either side of the volume face is employed. This includes the addition of a so-called mesh stretching metric. Additionally, non-linear blending with the existing NVSF scheme was performed in the interest of robustness and stability, particularly on equispaced meshes. The developed scheme is assessed in terms of accuracy, both analytically and numerically, via comparison to upwind methods including the popular QUICK and CUI techniques. Numerical tests involved the 1D scalar advection-diffusion equation, a 2D lid-driven cavity case and a turbulent flow case. Significant improvements in accuracy were achieved, with L2 error reductions of up to 75%.
Kallehauge, Jesper F; Sourbron, Steven; Irving, Benjamin; Tanderup, Kari; Schnabel, Julia A; Chappell, Michael A
2017-06-01
Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive and compare the performance of the linear compartmental tissue uptake (CTU) model with its nonlinear version with respect to percentage error and precision. The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU models were highly correlated (ρ ≥ 0.95). The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise. Magn Reson Med 77:2414-2423, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
NASA Astrophysics Data System (ADS)
Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming
2017-11-01
Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms with regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise, which either made the object boundaries weak or added extra information to them. We performed a robustness analysis of the simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for the ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For additive Gaussian and impulse noise, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixels demonstrated optimal compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. In conclusion, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
Robust sky light polarization detection with an S-wave plate in a light field camera.
Zhang, Wenjing; Zhang, Xuanzhe; Cao, Yu; Liu, Haibo; Liu, Zejin
2016-05-01
The sky light polarization navigator has many advantages, such as low cost and no decrease in accuracy during continuous operation. However, current celestial polarization measurement methods often perform poorly when the sky is covered by clouds, which reduces the accuracy of navigation. In this paper we introduce a new method and structure, based on a handheld light field camera and a radial polarizer composed of an S-wave plate and a linear polarizer, to detect the sky light polarization pattern across a wide field of view in a single snapshot. Each micro-subimage has a special intensity distribution. After extracting the texture features of these subimages, stable distribution information of the angle of polarization under a cloudy sky can be obtained. Our experimental results match well with the properties predicted by theory. Because the polarization pattern is obtained through image processing, rather than traditional methods based on mathematical computation, this method is less sensitive to errors in pixel gray values and thus has better anti-interference performance.
iXora: exact haplotype inferencing and trait association.
Utro, Filippo; Haiminen, Niina; Livingstone, Donald; Cornejo, Omar E; Royaert, Stefan; Schnell, Raymond J; Motamayor, Juan Carlos; Kuhn, David N; Parida, Laxmi
2013-06-06
We address the task of extracting accurate haplotypes from genotype data of individuals of large F1 populations for mapping studies. While methods for inferring parental haplotype assignments on large F1 populations exist in theory, these approaches do not work in practice at high levels of accuracy. We have designed iXora (Identifying crossovers and recombining alleles), a robust method for extracting reliable haplotypes of a mapping population, as well as parental haplotypes, that runs in linear time. Each allele in the progeny is assigned not just to a parent, but more precisely to a haplotype inherited from the parent. iXora shows an improvement of at least 15% in accuracy over similar systems in literature. Furthermore, iXora provides an easy-to-use, comprehensive environment for association studies and hypothesis checking in populations of related individuals. iXora provides detailed resolution in parental inheritance, along with the capability of handling very large populations, which allows for accurate haplotype extraction and trait association. iXora is available for non-commercial use from http://researcher.ibm.com/project/3430.
Discrimination Enhancement with Transient Feature Analysis of a Graphene Chemical Sensor.
Nallon, Eric C; Schnee, Vincent P; Bright, Collin J; Polcha, Michael P; Li, Qiliang
2016-01-19
A graphene chemical sensor is subjected to a set of structurally and chemically similar hydrocarbon compounds consisting of toluene, o-xylene, p-xylene, and mesitylene. The fractional change in resistance of the sensor upon exposure to these compounds exhibits a similar response magnitude among compounds, whereas large variation is observed within repetitions for each compound, causing a response overlap. Therefore, traditional features depending on the maximum response change will cause confusion during further discrimination and classification analysis. More robust features that are less sensitive to concentration, sampling, and drift variability would provide higher quality information. In this work, we have explored the advantage of using transient-based exponential fitting coefficients to enhance the discrimination of similar compounds. The advantage of such features for discriminating each compound is evaluated using principal component analysis (PCA). In addition, machine learning-based classification algorithms were used to compare the prediction accuracies when using fitting coefficients as features. The additional features greatly enhanced the discrimination between compounds in PCA and also improved the prediction accuracy by 34% when using linear discriminant analysis.
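Transient fitting coefficients of this kind can be obtained with a standard nonlinear least-squares fit; a minimal sketch, assuming a single-exponential response model (the paper's exact model form, time scales, and amplitudes are not specified here):

```python
import numpy as np
from scipy.optimize import curve_fit

def transient(t, a, tau, c):
    """Single-exponential sensor response model (assumed form)."""
    return a * (1.0 - np.exp(-t / tau)) + c

t = np.linspace(0, 60, 300)                    # seconds of exposure
resp = transient(t, a=0.8, tau=12.0, c=0.05)
resp += 0.01 * np.random.randn(t.size)         # synthetic noisy response

popt, _ = curve_fit(transient, t, resp, p0=(1.0, 10.0, 0.0))
a_fit, tau_fit, c_fit = popt                   # use (a, tau, c) as features
print(f"a={a_fit:.3f}, tau={tau_fit:.2f} s, c={c_fit:.3f}")
```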
Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.
2009-01-01
In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.
Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude
2015-04-27
Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.
Epoch-based Entropy for Early Screening of Alzheimer's Disease.
Houmani, N; Dreyfus, G; Vialatte, F B
2015-12-01
In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals at both the time level and the spatial level, using local density estimation by a Hidden Markov Model on inter-channel stationary epochs. The investigation is conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset, and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable to hyperparameter variations, and less sensitive to noise and sampling frequency disturbances than the other two complexity measures.
Robustness Analysis of Integrated LPV-FDI Filters and LTI-FTC System for a Transport Aircraft
NASA Technical Reports Server (NTRS)
Khong, Thuan H.; Shin, Jong-Yeob
2007-01-01
This paper proposes an analysis framework for robustness analysis of a nonlinear dynamic system that can be represented by a polynomial linear parameter varying (PLPV) system with constant bounded uncertainty. The proposed analysis framework contains three key tools: 1) a function substitution method which can convert a nonlinear system in polynomial form into a PLPV system, 2) a matrix-based linear fractional transformation (LFT) modeling approach, which can convert a PLPV system into an LFT system with a delta block that includes the key uncertainty and scheduling parameters, and 3) mu-analysis, which is a well-known robustness analysis tool for linear systems. The proposed analysis framework is applied to evaluating the performance of the LPV fault detection and isolation (FDI) filters of the closed-loop system of a transport aircraft in the presence of unmodeled actuator dynamics and sensor gain uncertainty. The robustness analysis results are compared with nonlinear time simulations.
Robust control for uncertain structures
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1991-01-01
Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.
Robust linear discriminant analysis with distance based estimators
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields an optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute for the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and real data study were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and is comparable with the existing robust LDR.
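Whatever location and scatter estimators are plugged in, the two-group linear discriminant rule has the same closed form; a minimal sketch using ordinary means and a pooled covariance as stand-ins (the MVV estimators themselves are not reproduced here; a robust variant would simply swap them in):

```python
import numpy as np

def linear_discriminant_rule(X1, X2):
    """Fisher-type two-group LDR: returns (w, threshold) such that a new x
    is assigned to group 1 when w @ x > threshold.

    Robust variants (e.g. MVV-based) replace the means and pooled
    covariance below with robust location/scatter estimates."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
    n1, n2 = len(X1), len(X2)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)   # pooled covariance
    w = np.linalg.solve(Sp, m1 - m2)
    return w, w @ (m1 + m2) / 2.0

X1 = np.random.randn(50, 3) + 1.0
X2 = np.random.randn(50, 3) - 1.0
w, c = linear_discriminant_rule(X1, X2)
print("assigned to group 1:", (np.random.randn(3) @ w) > c)
```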
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time-invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single-input, single-output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is also analyzed.
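The stability-margin measure described here, the minimum singular value of the return difference matrix I + L(jω), is easy to evaluate on a frequency grid; a minimal sketch for a toy 2x2 loop transfer matrix (the dynamics below are made up for illustration):

```python
import numpy as np

def min_return_difference_sv(L_of_s, omegas):
    """Evaluate sigma_min(I + L(jw)) over a frequency grid; its infimum is a
    multivariable generalization of the distance to the critical point."""
    m = L_of_s(1j * omegas[0]).shape[0]
    return np.array([
        np.linalg.svd(np.eye(m) + L_of_s(1j * w), compute_uv=False)[-1]
        for w in omegas
    ])

# Toy 2x2 loop transfer matrix L(s) (illustrative dynamics only)
def L(s):
    return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                     [0.1 / (s + 1.0), 2.0 / (s**2 + 0.8 * s + 1.0)]])

w = np.logspace(-2, 2, 400)
margin = min_return_difference_sv(L, w).min()
print(f"min over grid of sigma_min(I + L(jw)) = {margin:.3f}")
```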
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
Novel AC Servo Rotating and Linear Composite Driving Device for Plastic Forming Equipment
NASA Astrophysics Data System (ADS)
Liang, Jin-Tao; Zhao, Sheng-Dun; Li, Yong-Yi; Zhu, Mu-Zhi
2017-07-01
Existing plastic forming equipment is mostly driven by traditional AC motors through long transmission chains; low efficiency, large size, low precision and poor dynamic response are the common disadvantages. In order to realize high performance forming processes, the driving device should be improved, especially for complicated processing motions. Based on electric servo direct drive technology, a novel AC servo rotating and linear composite driving device is proposed, which implements both spindle rotation and feed motion without a transmission, so that a compact structure and precise control can be achieved. A flux-switching topology is employed in the rotating drive component for strong robustness, and a fractional-slot topology is employed in the linear direct drive component for large force capability. The mechanical structure for combining rotation and linear motion is then designed. A device prototype was manufactured, and the machining of each component and of the whole assembly is presented. Commercial servo amplifiers are utilized to construct the control system of the proposed device. To validate the effectiveness of the proposed composite driving device, an experimental study on dynamic test benches was conducted. The results indicate that the output torque can reach 420 N·m and the dynamic tracking errors are less than about 0.3 rad in the rotating drive, while the dynamic tracking errors are less than about 1.6 mm in the linear feed. The proposed research provides a method to construct a high-efficiency, high-accuracy direct driving device for plastic forming equipment.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one, in the sense that under missing at random (MAR) our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effects by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers.
Thompson, Clarissa A; Opfer, John E
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy.
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
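As a concrete illustration of the recommended RSA distance, the sketch below computes a cross-validated squared Euclidean distance between two conditions from a simple split-half partition; averaging over many folds and applying multivariate noise normalisation first, as the paper advises, are left out (all names and the two-fold scheme are assumptions of this sketch).

```python
import numpy as np

def cv_euclidean_sq(pat_a, pat_b):
    """Cross-validated squared Euclidean distance between two conditions.

    pat_a, pat_b: (n_trials, n_channels) arrays of single-trial patterns.
    Taking the inner product of difference vectors from two independent
    halves makes the estimate unbiased: pure noise averages to zero
    instead of inflating the distance, so values can be near zero (or
    slightly negative) when conditions do not differ.
    """
    half = pat_a.shape[0] // 2
    diff_1 = pat_a[:half].mean(axis=0) - pat_b[:half].mean(axis=0)
    diff_2 = pat_a[half:].mean(axis=0) - pat_b[half:].mean(axis=0)
    return float(diff_1 @ diff_2)
```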
Development of an integrated sub-picometric SWIFTS-based wavelength meter
NASA Astrophysics Data System (ADS)
Duchemin, Céline; Thomas, Fabrice; Martin, Bruno; Morino, Eric; Puget, Renaud; Oliveres, Robin; Bonneville, Christophe; Gonthiez, Thierry; Valognes, Nicolas
2017-02-01
SWIFTS™ technology has been known for over five years to offer compact and high-resolution laser spectrum analyzers. Growing demand for wavelength monitoring with ever better accuracy and resolution has pushed the development of a wavelength meter based on SWIFTS™ technology, named LW-10. As a reminder, the SWIFTS™ principle consists of a waveguide in which a stationary wave is created, sampled and read out by a linear image sensor array. Because of its inherent properties (non-uniform subsampling) and aliasing of the signal (as described by the Shannon-Nyquist criterion), the system offers only short spectral window bandwidths and thus needs a priori knowledge of the working wavelength as well as thermal monitoring. Although SWIFTS™-based devices are barely sensitive to atmospheric pressure, temperature control is a key factor in achieving both high accuracy and high wavelength meter resolution. Temperature control evolved from passive (temperature probing only) to active control (Peltier thermoelectric cooler) with milli-degree accuracy. The software part consists of dropping the Fourier-like transform in favour of a least-squares method applied directly to the interference pattern. Moreover, consideration of the system's chromatic behavior provides a "signature" for automated wavelength detection and discrimination. This new SWIFTS™-based device, the LW-10, shows outstanding results in terms of absolute accuracy, wavelength meter resolution and calibration robustness within a compact device, compared with other existing technologies. On the 630-1100 nm range, the final device configuration allows monitoring of pulsed or CW lasers with 20 MHz resolution and 200 MHz absolute accuracy. Non-exhaustive applications include tunable laser control and frequency-locking experiments.
Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair
NASA Astrophysics Data System (ADS)
Sasou, Akira; Kojima, Hiroaki
2009-12-01
Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear sensors such as a headset microphone, which can be an impediment, especially for the hand disabled. Conversely, it is also well known that speech recognition accuracy degrades drastically when the microphone is placed far from the user. In this paper, we develop a noise-robust speech recognition system for a voice-driven wheelchair that achieves almost the same recognition accuracy as a headset microphone without requiring the user to wear any sensors. We verified the effectiveness of our system in experiments in different environments, confirming this recognition performance.
Bernardy, Jeffry A.; Hubert, Terrance D.; Ogorek, Jacob M.; Schmidt, Larry J.
2013-01-01
An LC/MS method was developed and validated for the quantitative determination and confirmation of antimycin-A (ANT-A) in water from lakes or streams. Three different water sample volumes (25, 50, and 250 mL) were evaluated. ANT-A was stabilized in the field by immediately extracting it from water into anhydrous acetone using SPE. The stabilized concentrated samples were then transported to a laboratory and analyzed by LC/MS using negative electrospray ionization. The method was determined to have adequate accuracy (78 to 113% recovery), precision (0.77 to 7.5% RSD with samples ≥500 ng/L and 4.8 to 17% RSD with samples ≤100 ng/L), linearity, and robustness over an LOQ range from 8 to 51 600 ng/L.
Chang, Ching-Min; Lo, Yu-Lung; Tran, Nghia-Khanh; Chang, Yu-Jen
2018-03-20
A method is proposed for characterizing the optical properties of articular cartilage sliced from a pig's thighbone using a Stokes-Mueller polarimetry technique. The principal axis angle, phase retardance, optical rotation angle, circular diattenuation, diattenuation axis angle, linear diattenuation, and depolarization index properties of the cartilage sample are all decoupled in the proposed analytical model. Consequently, the accuracy and robustness of the extracted results are improved. The glucose concentration, collagen distribution, and scattering properties of samples from various depths of the articular cartilage are systematically explored via an inspection of the related parameters. The results show that the glucose concentration and scattering effect are both enhanced in the superficial region of the cartilage. By contrast, the collagen density increases with an increasing sample depth.
Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E
2017-05-01
We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is focused on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics based simulation and sampling methods which are a significant improvement over traditional methods used to interpret such data. Published by Elsevier Inc.
Cho, HyunGi; Yeon, Suyong; Choi, Hyunga; Doh, Nakju
2018-01-01
In a group of general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes can make the estimation problem ill-posed. This in turn can cause a degenerate state in which not all states can be estimated. To solve this problem, this paper first proposes a degeneracy detection method. A compensation method that can fix orientations by projecting an inertial measurement unit's (IMU) information is then explained. Experiments were conducted using an IMU-Kinect v2 integrated sensor system prone to falling into degenerate cases owing to its narrow field-of-view. Results showed that the proposed framework can enhance map accuracy by successful detection and compensation of degenerated orientations. PMID:29565287
Laser-Interferometric Broadband Seismometer for Epicenter Location Estimation
Lee, Kyunghyun; Kwon, Hyungkwan; You, Kwanho
2017-01-01
In this paper, we suggest a seismic signal measurement system that uses a laser interferometer. The heterodyne laser interferometer is used as a seismometer due to its high accuracy and robustness. Seismic data measured by the laser interferometer is used to analyze crucial earthquake characteristics. To measure P-S time more precisely, the short time Fourier transform and instantaneous frequency estimation methods are applied to the intensity signal (Iy) of the laser interferometer. To estimate the epicenter location, the range difference of arrival algorithm is applied with the P-S time result. The linear matrix equation of the epicenter localization can be derived using P-S time data obtained from more than three observatories. We prove the performance of the proposed algorithm through simulation and experimental results. PMID:29065515
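To illustrate how the linear matrix equation arises, the sketch below converts P-S times to epicentral distances with an Omori-type constant and linearises the circle equations by pairwise differencing; the station layout, times and velocity constant are hypothetical, and the paper's exact formulation may differ.

```python
import numpy as np

# Hypothetical station coordinates (km) and measured P-S times (s)
stations = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 60.0], [70.0, 45.0]])
t_ps = np.array([4.1, 5.6, 3.2, 6.0])

# Omori-type relation: distance ~ k * t_PS, k = vp*vs/(vp - vs),
# roughly 8 km/s for typical crustal velocities (assumed here)
k = 8.0
d = k * t_ps

# Subtracting the first circle equation (x - xi)^2 + (y - yi)^2 = di^2
# from the others cancels the quadratic terms, leaving A @ [x, y] = b
x0, y0 = stations[0]
A = 2.0 * (stations[1:] - stations[0])
b = (d[0]**2 - d[1:]**2
     + stations[1:, 0]**2 - x0**2
     + stations[1:, 1]**2 - y0**2)

epicenter, *_ = np.linalg.lstsq(A, b, rcond=None)
print(epicenter)  # least-squares (x, y) estimate in km
```

With more than three observatories the system is overdetermined and the least-squares solution absorbs measurement noise, which is why additional stations improve the estimate.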
Bernardy, Jeffry A; Hubert, Terrance D; Ogorek, Jacob M; Schmidt, Larry J
2013-01-01
An LC/MS method was developed and validated for the quantitative determination and confirmation of antimycin-A (ANT-A) in water from lakes or streams. Three different water sample volumes (25, 50, and 250 mL) were evaluated. ANT-A was stabilized in the field by immediately extracting it from water into anhydrous acetone using SPE. The stabilized concentrated samples were then transported to a laboratory and analyzed by LC/MS using negative electrospray ionization. The method was determined to have adequate accuracy (78 to 113% recovery), precision (0.77 to 7.5% RSD with samples ≥500 ng/L and 4.8 to 17% RSD with samples ≤100 ng/L), linearity, and robustness over an LOQ range from 8 to 51 600 ng/L.
Sarzotti-Kelsoe, Marcella; Bailer, Robert T; Turk, Ellen; Lin, Chen-li; Bilska, Miroslawa; Greene, Kelli M.; Gao, Hongmei; Todd, Christopher A.; Ozaki, Daniel A.; Seaman, Michael S.; Mascola, John R.; Montefiori, David C.
2014-01-01
The TZM-bl assay measures antibody-mediated neutralization of HIV-1 as a function of reductions in HIV-1 Tat-regulated firefly luciferase (Luc) reporter gene expression after a single round of infection with Env-pseudotyped viruses. This assay has become the main endpoint neutralization assay used for the assessment of preclinical and clinical trial samples by a growing number of laboratories worldwide. Here we present the results of the formal optimization and validation of the TZM-bl assay, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. The assay was evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness. The validated manual TZM-bl assay was also adapted, optimized and qualified to an automated 384-well format. PMID:24291345
Optimal estimation for the satellite attitude using star tracker measurements
NASA Technical Reports Server (NTRS)
Lo, J. T.-H.
1986-01-01
An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.
NASA Astrophysics Data System (ADS)
Gupta, Lokesh Kumar
2012-11-01
Seven process-related impurities were identified by LC-MS in the atorvastatin calcium drug substance. The structures of the impurities were confirmed by modern spectroscopic techniques such as 1H NMR and IR, and by physicochemical studies conducted using synthesized authentic reference compounds. The synthesized reference samples of the impurity compounds were used for quantitative HPLC determination. The impurities were detected by a newly developed gradient reverse-phase high-performance liquid chromatographic (HPLC) method. The system suitability of the HPLC analysis established the validity of the separation. The analytical method was validated according to International Conference on Harmonisation (ICH) guidelines with respect to specificity, precision, accuracy, linearity, robustness and the stability of analytical solutions, demonstrating the power of the newly developed HPLC method.
Review of LFTs, LMIs, and mu. [Linear Fractional Transformations, Linear Matrix Inequalities
NASA Technical Reports Server (NTRS)
Doyle, John; Packard, Andy; Zhou, Kemin
1991-01-01
The authors present a tutorial overview of linear fractional transformations (LFTs) and the role of the structured singular value, mu, and linear matrix inequalities (LMIs) in solving LFT problems. The authors first introduce the notation for LFTs and briefly discuss some of their properties. They then describe mu and its connections with LFTs. They focus on two standard notions of robust stability and performance, mu stability and performance and Q stability and performance, and their relationship is discussed. Comparisons with the L1 theory of robust performance with structured uncertainty are considered.
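For reference, the lower LFT at the center of this framework has the standard form

$$ F_{\ell}(M,\Delta) = M_{11} + M_{12}\,\Delta\,(I - M_{22}\Delta)^{-1} M_{21}, \qquad M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}, $$

which is well posed whenever \(I - M_{22}\Delta\) is invertible; robust stability over a structured uncertainty set then reduces to bounding \(\mu\) of the subblock that \(\Delta\) closes around (a standard statement, included here for orientation rather than quoted from the paper).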
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res., 2011, 50, 10567). For these robust counterpart optimization formulations, the corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristics (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
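As a concrete instance of the constraints being bounded, consider a single linear constraint \(\sum_j \tilde a_j x_j \le b\) with uncertain coefficients \(\tilde a_j = a_j + \xi_j \hat a_j\), \(|\xi_j| \le 1\). Under a box uncertainty set of size \(\Psi\), the robust counterpart takes the standard form

$$ \sum_j a_j x_j + \Psi \sum_j \hat a_j\,|x_j| \le b, $$

and the probability bounds studied here quantify how likely the original constraint is to hold under random \(\xi_j\) for a given set size. The notation is a simplified sketch of the cited Li-Ding-Floudas formulations, not a reproduction of them.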
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting how many parameters may perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear. The methodology is compared with a prior method based on multiple Monte Carlo simulation runs, showing that the approach presented in this paper yields better performance.
Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra
2009-03-01
There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing despite its ability to generate in situ results and efficiency improvement. One potential reason may be the lack of clear validation guidelines that can be applied for the assessment of suitability of fiber optics. This article describes a comprehensive validation scheme and development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied for characterizing the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
A new look at the robust control of discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Todorov, M. G.; Fragoso, M. D.
2016-03-01
In this paper, we make a foray into the role played by a set of four operators in the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers that are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle and for an underactuated robotic arm.
El-Bagary, Ramzia I; Elkady, Ehab F; Farid, Naira A; Youssef, Nadia F
2017-03-05
Apixaban and tirofiban hydrochloride are low-molecular-weight anticoagulants. The two drugs exhibit native fluorescence, which allows the development of simple and valid spectrofluorimetric methods for the determination of apixaban at λex/λem = 284/450 nm and tirofiban HCl at λex/λem = 227/300 nm in aqueous media. Different experimental parameters affecting the fluorescence intensities were carefully studied and optimized. The fluorescence intensity-concentration plots were linear over the ranges of 0.2-6 μg/mL for apixaban and 0.2-5 μg/mL for tirofiban HCl. The limits of detection were 0.017 and 0.019 μg/mL and the quantification limits were 0.057 and 0.066 μg/mL for apixaban and tirofiban HCl, respectively. The fluorescence quantum yields of apixaban and tirofiban were calculated as 0.43 and 0.49, respectively. Method validation was evaluated for linearity, specificity, accuracy, precision and robustness as per ICH guidelines. The proposed spectrofluorimetric methods were successfully applied for the determination of apixaban in Eliquis tablets and tirofiban HCl in Aggrastat intravenous infusion. Tolerance ratios were tested to study the effect of foreign interference from dosage-form excipients. Student's t and F tests revealed no statistically significant difference between the developed spectrofluorimetric methods and the comparison methods regarding accuracy and precision, so the proposed methods can serve as an alternative for the analysis of apixaban and tirofiban HCl in QC laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
Pandya, Jui J; Sanyal, Mallika; Shrivastav, Pranav S
2017-09-01
A new, simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the simultaneous determination of the anthelmintic drug albendazole and its active metabolite, albendazole sulfoxide. Planar chromatographic separation was performed on an aluminum-backed layer of silica gel 60G F254 using a mixture of toluene-acetonitrile-glacial acetic acid (7.0:2.9:0.1, v/v/v) as the mobile phase. For quantitation, the separated spots were scanned densitometrically at 225 nm. The retention factors (Rf) obtained under the established conditions were 0.76 ± 0.01 and 0.50 ± 0.01, and the regression plots were linear (r2 ≥ 0.9997) in the concentration ranges 50-350 and 100-700 ng/band for albendazole and albendazole sulfoxide, respectively. The method was validated for linearity, specificity, accuracy (recovery), precision, repeatability, stability and robustness. The limits of detection and quantitation were 9.84 and 29.81 ng/band for albendazole and 21.60 and 65.45 ng/band for albendazole sulfoxide, respectively. For plasma samples, solid-phase extraction of the analytes yielded mean extraction recoveries of 87.59 and 87.13% for albendazole and albendazole sulfoxide, respectively. The method was successfully applied for the analysis of albendazole in pharmaceutical formulations with accuracy ≥99.32%. Copyright © 2017 John Wiley & Sons, Ltd.
Climate Cycles and Forecasts of Cutaneous Leishmaniasis, a Nonstationary Vector-Borne Disease
Chaves, Luis Fernando; Pascual, Mercedes
2006-01-01
Background: Cutaneous leishmaniasis (CL) is one of the main emergent diseases in the Americas. As in other vector-transmitted diseases, its transmission is sensitive to the physical environment, but no study has addressed the nonstationary nature of such relationships or the interannual patterns of cycling of the disease. Methods and Findings: We studied monthly data, spanning from 1991 to 2001, of CL incidence in Costa Rica using several approaches for nonstationary time series analysis in order to ensure robustness in the description of CL's cycles. Interannual cycles of the disease and the association of these cycles with climate variables were described using frequency and time-frequency techniques for time series analysis. We fitted linear models to the data using climatic predictors, and tested forecasting accuracy for several intervals of time. Forecasts were evaluated using “out of fit” data (i.e., data not used to fit the models). We showed that CL has cycles of approximately 3 y that are coherent with those of temperature and El Niño Southern Oscillation indices (Sea Surface Temperature 4 and Multivariate ENSO Index). Conclusions: Linear models using temperature and MEI can satisfactorily predict CL incidence dynamics up to 12 mo ahead, with an accuracy that varies from 72% to 77% depending on prediction time. They clearly outperform simpler models with no climate predictors, a finding that further supports a dynamical link between the disease and climate. PMID:16903778
Srivastava, Saurabh Kumar; Singh, Sandeep Kumar; Suri, Jasjit S
2018-04-13
A machine learning (ML)-based text classification system has several classifiers. The performance evaluation (PE) of such an ML system is typically driven by the training data size and the partition protocols used. These systems often show low accuracy because text classification systems lack the ability to model the input text data in terms of its noise characteristics. This research study proposes the concept of a misrepresentation ratio (MRR) on input healthcare text data and models the PE criteria for validating the hypothesis. Further, such a system provides a platform to amalgamate several attributes of the ML system: data size, classifier type, partitioning protocol and percentage MRR. Our comprehensive data analysis consisted of five types of text data sets (TwitterA, WebKB4, Disease, Reuters (R8), and SMS); five kinds of classifiers (support vector machine with linear kernel (SVM-L), MLP-based neural network, AdaBoost, stochastic gradient descent and decision tree); and five types of training protocols (K2, K4, K5, K10 and JK). Using the decreasing order of MRR, our ML system demonstrates mean classification accuracies of 70.13 ± 0.15%, 87.34 ± 0.06%, 93.73 ± 0.03%, 94.45 ± 0.03% and 97.83 ± 0.01%, respectively, using all the classifiers and protocols. The corresponding AUC is 0.98 for the SMS data using the Multi-Layer Perceptron (MLP)-based neural network. Among all the classifiers, the best accuracy of 91.84 ± 0.04% is achieved by the MLP-based neural network, which is 6% better than previously published results. We further observed that as MRR decreases, the system robustness increases, as validated by the standard deviations. The overall system accuracy across all data types, classifiers and protocols is 89%, showing the entire ML system to be novel, robust and unique. The system was also tested for stability and reliability.
Predictive accuracy of particle filtering in dynamic models supporting outbreak projections.
Safarishahrbijari, Anahita; Teyhouee, Aydin; Waldner, Cheryl; Liu, Juxin; Osgood, Nathaniel D
2017-09-26
While a new generation of computational statistics algorithms and the availability of data streams raise the potential for recurrently regrounding dynamic models with incoming observations, the effectiveness of such arrangements can be highly subject to specifics of the configuration (e.g., frequency of sampling and representation of behaviour change), and there has been little attempt to identify effective configurations. Combining dynamic models with particle filtering, we explored a solution focusing on creating quickly formulated models regrounded automatically and recurrently as new data become available. Given a latent underlying case count, we assumed that observed incident case counts followed a negative binomial distribution. In accordance with the condensation algorithm, each such observation led to an updating of particle weights. We evaluated the effectiveness of various particle filtering configurations against each other and against an approach without particle filtering, according to the accuracy of the model in predicting future prevalence given data to a certain point, using a norm-based discrepancy metric. We examined the effectiveness of particle filtering under varying times between observations, negative binomial dispersion parameters, and rates with which the contact rate could evolve. We observed that more frequent observations of empirical data yielded super-linearly improved accuracy in model predictions. We further found that, for the data studied here, the most favourable assumptions to make regarding the parameters associated with the negative binomial distribution and changes in contact rate were robust across observation frequency and the observation point in the outbreak. Combining dynamic models with particle filtering can perform well in projecting the future evolution of an outbreak. Most importantly, the remarkable improvements in predictive accuracy resulting from more frequent sampling suggest that investments to achieve efficient reporting mechanisms may be more than paid back by improved planning capacity. The robustness of the results on particle filter configuration in this case study suggests that it may be possible to formulate effective standard guidelines and regularized approaches for such techniques in particular epidemiological contexts. Finally, the work tentatively suggests the potential for health decision makers to secure strong guidance when anticipating outbreak evolution for emerging infectious diseases by combining even very rough models with particle filtering methods.
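A minimal sketch of the condensation-style weight update described above, assuming the paper's negative-binomial observation model around each particle's latent case count; the parameterisation and names are illustrative, not the authors' code.

```python
import numpy as np
from scipy import stats

def update_weights(weights, latent_counts, observed_count, dispersion):
    """One particle-filter weight update against an observed case count.

    Each particle carries a latent incident case count, taken as the
    mean of a negative binomial observation model with size parameter
    `dispersion` (scipy's nbinom uses the (n, p) convention).
    """
    mu = np.maximum(latent_counts, 1e-9)
    p = dispersion / (dispersion + mu)
    lik = stats.nbinom.pmf(observed_count, dispersion, p)
    w = weights * lik
    return w / w.sum()  # renormalise; guard against w.sum() == 0 in practice

def effective_sample_size(weights):
    # Resample (e.g. systematically) when this falls below a threshold
    return 1.0 / np.sum(weights ** 2)
```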
Explicit asymmetric bounds for robust stability of continuous and discrete-time systems
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang; Antsaklis, Panos J.
1993-01-01
The problem of robust stability in linear systems with parametric uncertainties is considered. Explicit stability bounds on uncertain parameters are derived and expressed in terms of linear inequalities for continuous systems, and inequalities with quadratic terms for discrete-time systems. Cases where system parameters are nonlinear functions of an uncertainty are also examined.
NASA Technical Reports Server (NTRS)
Turso, James A.; Litt, Jonathan S.
2004-01-01
A method for accommodating engine deterioration via a scheduled Linear Parameter Varying Quadratic Lyapunov Function (LPVQLF)-based controller is presented. The LPVQLF design methodology provides a means for developing unconditionally stable, robust control of Linear Parameter Varying (LPV) systems. The controller is scheduled on the Engine Deterioration Index, a function of estimated parameters that relate to engine health, and is computed using a multilayer feedforward neural network. Acceptable thrust response and tight control of exhaust gas temperature (EGT) are accomplished by adjusting the performance weights on these parameters for different levels of engine degradation. Nonlinear simulations demonstrate that the controller achieves the specified performance objectives while being robust to engine deterioration as well as to engine-to-engine variations.
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated on a set of eighth-order random examples. A literature review of robust controller design methods follows, including a number of methods for reducing trajectory and performance-index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
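The abstract does not reproduce the iteration itself; for orientation, the continuous-time algebraic Riccati equation behind LQR-type designs is

$$ A^{\mathsf{T}}P + PA - PBR^{-1}B^{\mathsf{T}}P + Q = 0, \qquad K = R^{-1}B^{\mathsf{T}}P, $$

and Riccati iterations of the Kleinman type solve it by repeatedly solving the Lyapunov equation \((A - BK_k)^{\mathsf{T}}P_k + P_k(A - BK_k) + Q + K_k^{\mathsf{T}}RK_k = 0\) and updating \(K_{k+1} = R^{-1}B^{\mathsf{T}}P_k\) from a stabilising initial gain. Whether this matches the report's exact scheme is an assumption; the form is included only as background.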
Robust linear quadratic designs with respect to parameter uncertainty
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1992-01-01
The authors derive a linear quadratic regulator (LQR) which is robust to parametric uncertainty by using the overbounding method of I. R. Petersen and C. V. Hollot (1986). The resulting controller is determined from the solution of a single modified Riccati equation. It is shown that, when applied to a structural system, the controller gains add robustness by minimizing the potential energy of uncertain stiffness elements, and minimizing the rate of dissipation of energy through uncertain damping elements. A worst-case disturbance in the direction of the uncertainty is also considered. It is proved that performance robustness has been increased with the robust LQR when compared to a mismatched LQR design where the controller is designed on the nominal system, but applied to the actual uncertain system.
Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S
2011-07-01
The aim of this work was to develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of lornoxicam (LOR) and thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from the pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase, and no immiscibility issues were found. Densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (limit of detection), LOQ (limit of quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO, with correlation coefficients r2 = 0.998 and 0.999, respectively. The percentage recovery for both analytes was in the range of 98.7-101.2%. The proposed method was optimized and validated as per the ICH guidelines.
A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form.
Thiruvengada, Rajan Vs; Mohamed, Saleem Ts; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C
2010-10-01
An isocratic reversed-phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C8 stainless-steel column (250 mm × 4.6 mm, 5-μm particle size). The mobile phase comprised two solvents (Solvent A: buffer, 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine; Solvent B: acetonitrile). The ratio of Solvent A to Solvent B was 75:25 v/v. The flow rate was 1.0 mL min-1 with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulations.
A Simple RP-HPLC Method for Quantitation of Itopride HCl in Tablet Dosage Form
Thiruvengada, Rajan VS; Mohamed, Saleem TS; Ramkanth, S; Alagusundaram, M; Ganaprakash, K; Madhusudhana, Chetty C
2010-01-01
An isocratic reversed-phase high-performance liquid chromatographic method with ultraviolet detection at 220 nm has been developed for the quantification of itopride hydrochloride in tablet dosage form. The quantification was carried out using a C8 stainless-steel column (250 mm × 4.6 mm, 5-μm particle size). The mobile phase comprised two solvents (Solvent A: buffer, 1.4 mL ortho-phosphoric acid adjusted to pH 3.0 with triethylamine; Solvent B: acetonitrile). The ratio of Solvent A to Solvent B was 75:25 v/v. The flow rate was 1.0 mL min-1 with UV detection at 220 nm. The method has been validated and proved to be robust. The calibration curve was linear in the concentration range of 80-120% with a coefficient of correlation of 0.9995. The percentage recovery for itopride HCl was 100.01%. The proposed method was validated for its selectivity, linearity, accuracy, and precision. The method was found to be suitable for the quality control of itopride HCl in tablet dosage formulations. PMID:21264104
Patel, Archita; Macwana, Chhaya; Parmar, Vishal; Patel, Samir
2012-01-01
An accurate, simple, reproducible, and sensitive HPLC method was developed and validated for the simultaneous determination of atorvastatin calcium, ezetimibe, and fenofibrate in a tablet formulation. The analyses were performed on an RP C18 column, 150 × 4.60 mm id, 5 μm particle size. The mobile phase, methanol-acetonitrile-water (76 + 13 + 11, v/v/v), was pumped at a constant flow rate of 1 mL/min. UV detection was performed at 253 nm. Retention times of atorvastatin calcium, ezetimibe, and fenofibrate were found to be 2.25, 3.68, and 6.41 min, respectively. The method was validated in terms of linearity, precision, accuracy, LOD, LOQ, and robustness. The response was linear in the range 2-10 microg/mL (r2 = 0.998) for atorvastatin calcium, 2-10 microg/mL (r2 = 0.998) for ezetimibe, and 40-120 microg/mL (r2 = 0.998) for fenofibrate. The developed method can be used for routine quality analysis of the drugs in the tablet formulation.
Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen
2017-03-22
The octanol-air partition coefficient (KOA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of KOA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for KOA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting KOA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log KOA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. In particular, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
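The fitted coefficients are not given in the abstract; for orientation, pp-LFER models for KOA conventionally take the Abraham form

$$ \log K_{\mathrm{OA}} = c + eE + sS + aA + bB + lL, $$

where \(E, S, A, B, L\) are solute descriptors (excess molar refraction, dipolarity/polarizability, hydrogen-bond acidity and basicity, and the hexadecane partition descriptor) and the lowercase system coefficients are fitted to data. Presumably the pp-LFER-T variant makes these coefficients functions of temperature, but the study's exact parameterisation is not reproduced here.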
Decentralized robust nonlinear model predictive controller for unmanned aerial systems
NASA Astrophysics Data System (ADS)
Garcia Garreton, Gonzalo A.
The nonlinear and unsteady nature of aircraft aerodynamics, together with the limited practical range of controls and state variables, makes linear control theory inadequate, especially in the presence of external disturbances such as wind. In the classical approach, aircraft are controlled by multiple inner and outer loops, designed separately and sequentially. For unmanned aerial systems in particular, control technology must evolve to a point where autonomy is extended to the entire mission flight envelope. This requires advanced controllers that have sufficient robustness, track complex trajectories, and use all of the vehicle's control capabilities at higher levels of accuracy. In this work, a robust nonlinear model predictive controller is designed to command and control an unmanned aerial system to track complex tight trajectories in the presence of internal and external perturbations. The flight system developed in this work achieves the above performance by using: 1. a nonlinear guidance algorithm that enables the vehicle to follow an arbitrary trajectory shaped by moving points; 2. a formulation that embeds the guidance logic and trajectory information in the aircraft model, avoiding cross coupling and control degradation; 3. an artificial neural network, designed to adaptively estimate and provide aerodynamic and propulsive forces in real time; and 4. a mixed-sensitivity approach that enhances the robustness of the nonlinear model predictive controller, overcoming the effect of unmodeled dynamics, external disturbances such as wind, and additive measurement perturbations such as noise and biases. These elements have been integrated, tested in simulation and against previously stored flight test data, and shown to be feasible.
Improved blood glucose estimation through multi-sensor fusion.
Xiong, Feiyu; Hipszer, Brian R; Joseph, Jeffrey; Kam, Moshe
2011-01-01
Continuous glucose monitoring systems are an integral component of diabetes management. Efforts to improve the accuracy and robustness of these systems are at the forefront of diabetes research. Towards this goal, a multi-sensor approach was evaluated in hospitalized patients. In this paper, we report on a multi-sensor fusion algorithm to combine glucose sensor measurements in a retrospective fashion. The results demonstrate the algorithm's ability to improve the accuracy and robustness of the blood glucose estimation with current glucose sensor technology.
Feature weight estimation for gene selection: a local hyperlinear learning approach
2014-01-01
Background: Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results: We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust to the degradation of noisy features, even in vast dimensions. To demonstrate the performance of our method, extensive classification experiments were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion: Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three respects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
Welch, Leslie; Dong, Xiao; Hewitt, Daniel; Irwin, Michelle; McCarty, Luke; Tsai, Christina; Baginski, Tomasz
2018-06-02
Free thiol content, and its consistency, is one of the product quality attributes of interest during the technical development of manufactured recombinant monoclonal antibodies (mAbs). We describe a new, mid/high-throughput reversed-phase high-performance liquid chromatography (RP-HPLC) method, coupled with derivatization of free thiols, for the determination of total free thiol content in an E. coli-expressed therapeutic monovalent monoclonal antibody, mAb1. Initial selection of the derivatization reagent used a hydrophobicity-tailored approach: maleimide-based thiol-reactive reagents with varying degrees of hydrophobicity were assessed to identify one that provided adequate chromatographic resolution and robust quantitation of free-thiol-containing mAb1 forms. The method relies on covalent derivatization of free thiols in denatured mAb1 with an N-tert-butylmaleimide (NtBM) label, followed by RP-HPLC separation with UV-based quantitation of native (disulfide-containing) and labeled (free-thiol-containing) forms. The method demonstrated good specificity, precision, linearity, accuracy and robustness. Accuracy of the method, for samples with a wide range of free thiol content, was demonstrated using admixtures as well as by comparison to an orthogonal LC-MS peptide mapping method with isotope tagging of free thiols. The developed method has a facile workflow that fits well into both R&D characterization and quality control (QC) testing environments. The hydrophobicity-tailored approach to the selection of the free thiol derivatization reagent is easily applied to the rapid development of free thiol quantitation methods for full-length recombinant antibodies. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Xue; Li, Xiaohui; Yu, Xin; Chen, Deying; Liu, Aichun
2018-01-01
Diagnosis of malignancies is a challenging clinical issue. In this work, we present quick and robust diagnosis and discrimination of lymphoma and multiple myeloma (MM) using laser-induced breakdown spectroscopy (LIBS) conducted on human serum samples, in combination with chemometric methods. The serum samples collected from lymphoma and MM cancer patients and healthy controls were deposited on filter papers and ablated with a pulsed 1064 nm Nd:YAG laser. 24 atomic lines of Ca, Na, K, H, O, and N were selected for malignancy diagnosis. Principal component analysis (PCA), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and k nearest neighbors (kNN) classification were applied to build the malignancy diagnosis and discrimination models. The performances of the models were evaluated using 10-fold cross validation. The discrimination accuracy, confusion matrix and receiver operating characteristic (ROC) curves were obtained. The values of area under the ROC curve (AUC), sensitivity and specificity at the cut-points were determined. The kNN model exhibits the best performances with overall discrimination accuracy of 96.0%. Distinct discrimination between malignancies and healthy controls has been achieved with AUC, sensitivity and specificity for healthy controls all approaching 1. For lymphoma, the best discrimination performance values are AUC = 0.990, sensitivity = 0.970 and specificity = 0.956. For MM, the corresponding values are AUC = 0.986, sensitivity = 0.892 and specificity = 0.994. The results show that the serum-LIBS technique can serve as a quick, less invasive and robust method for diagnosis and discrimination of human malignancies.
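For illustration, a 10-fold cross-validated kNN evaluation of the kind described can be set up as below with scikit-learn; the feature matrix, labels and neighbourhood size are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: intensities of 24 selected atomic lines per spectrum
rng = np.random.default_rng(0)
X = rng.random((120, 24))
y = rng.choice(["healthy", "lymphoma", "MM"], size=120)

# Scale line intensities, then classify by nearest neighbours
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```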
Robust Computation of Linear Models, or How to Find a Needle in a Haystack
2012-02-17
robustly, project it onto a sphere, and then apply standard PCA. This approach is due to [LMS+99]. Maronna et al. [MMY06] recommend it as a preferred... of this form is due to Chandrasekaran et al. [CSPW11]. Given an observed matrix X, they propose to solve the semidefinite problem minimize ‖P‖S1 + γ... regularization parameter γ negotiates a tradeoff between the two goals. Candès et al. [CLMW11] study the performance of (2.1) for robust linear
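The truncated program appears to be the principal component pursuit of the cited works, which in full reads (an inference from the fragment, not a quotation):

$$ \min_{P,\,Q}\ \|P\|_{S_1} + \gamma\,\|Q\|_{1} \quad \text{subject to} \quad P + Q = X, $$

where \(\|\cdot\|_{S_1}\) is the Schatten 1-norm (nuclear norm), promoting a low-rank \(P\), and \(\|\cdot\|_1\) is the entrywise \(\ell_1\) norm, promoting a sparse corruption term \(Q\); the parameter \(\gamma\) trades the two off.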
Practical aspects of estimating energy components in rodents
van Klinken, Jan B.; van den Berg, Sjoerd A. A.; van Dijk, Ko Willems
2013-01-01
Recently there has been an increasing interest in exploiting computational and statistical techniques for the purpose of component analysis of indirect calorimetry data. Using these methods it becomes possible to dissect daily energy expenditure into its components and to assess the dynamic response of the resting metabolic rate (RMR) to nutritional and pharmacological manipulations. To perform robust component analysis, however, is not straightforward and typically requires the tuning of parameters and the preprocessing of data. Moreover the degree of accuracy that can be attained by these methods depends on the configuration of the system, which must be properly taken into account when setting up experimental studies. Here, we review the methods of Kalman filtering, linear, and penalized spline regression, and minimal energy expenditure estimation in the context of component analysis and discuss their results on high resolution datasets from mice and rats. In addition, we investigate the effect of the sample time, the accuracy of the activity sensor, and the washout time of the chamber on the estimation accuracy. We found that on the high resolution data there was a strong correlation between the results of Kalman filtering and penalized spline (P-spline) regression, except for the activity respiratory quotient (RQ). For low resolution data the basal metabolic rate (BMR) and resting RQ could still be estimated accurately with P-spline regression, having a strong correlation with the high resolution estimate (R2 > 0.997; sample time of 9 min). In contrast, the thermic effect of food (TEF) and activity related energy expenditure (AEE) were more sensitive to a reduction in the sample rate (R2 > 0.97). In conclusion, for component analysis on data generated by single channel systems with continuous data acquisition both Kalman filtering and P-spline regression can be used, while for low resolution data from multichannel systems P-spline regression gives more robust results. PMID:23641217
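A minimal sketch of the P-spline regression component, in the Eilers-Marx style of a B-spline basis with a difference penalty on adjacent coefficients; basis size, penalty weight and the synthetic trace are illustrative assumptions, not the paper's settings (requires SciPy >= 1.8 for BSpline.design_matrix).

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=30, degree=3, lam=10.0, diff_order=2):
    """Penalized B-spline (P-spline) smoother.

    Solves (B'B + lam * D'D) a = B'y, where B is a B-spline design
    matrix on equally spaced knots and D penalizes differences of
    neighbouring coefficients (roughness).
    """
    lo, hi = x.min(), x.max()
    inner = np.linspace(lo, hi, n_basis - degree + 1)
    t = np.r_[[lo] * degree, inner, [hi] * degree]       # open knot vector
    B = BSpline.design_matrix(x, t, degree).toarray()
    D = np.diff(np.eye(n_basis), n=diff_order, axis=0)   # penalty matrix
    a = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return BSpline(t, a, degree)

# Example: smooth a noisy 24 h metabolic-rate-like trace (synthetic)
x = np.linspace(0.0, 24.0, 200)
y = 10 + 2 * np.sin(x / 2) + np.random.default_rng(1).normal(0, 0.5, x.size)
smooth = pspline_fit(x, y)(x)
```

Increasing lam pulls the fit towards a smooth trend (useful for extracting RMR-like baselines), while a small lam follows short-term activity-driven fluctuations.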
Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko
2018-03-01
The aim of the present study was to empirically evaluate confusion matrices in device validation. We compared the confusion matrix method with linear regression and error indices in the validation of a device measuring the feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h of behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with the device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of the measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or very inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation, since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
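For illustration, the per-second comparison of device output against the video reference reduces to a 2×2 confusion matrix and the usual derived measures; a minimal sketch with assumed variable names, not the authors' code.

```python
import numpy as np

def confusion_metrics(device, video):
    """Confusion-matrix measures for a binary behaviour (feeding vs other).

    device, video: boolean arrays, one entry per observation second,
    True where the behaviour was classified as feeding.
    """
    tp = np.sum(device & video)
    tn = np.sum(~device & ~video)
    fp = np.sum(device & ~video)
    fn = np.sum(~device & video)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # feeding correctly detected
        "specificity": tn / (tn + fp),   # other behaviour correctly rejected
        "precision":   tp / (tp + fp),
    }
```

Unlike regression of hourly totals, these measures are computed from the raw second-by-second agreement, which is why they remain meaningful regardless of the data distribution.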
Elzoghby, Mostafa; Li, Fu; Arafa, Ibrahim. I.; Arif, Usman
2017-01-01
Information fusion from multiple sensors ensures the accuracy and robustness of a navigation system, especially in the absence of global positioning system (GPS) data, which degrade in many cases. A way to deal with multi-mode estimation for a small fixed-wing unmanned aerial vehicle (UAV) localization framework is proposed, which depends on a Luenberger observer-based linear matrix inequality (LMI) approach. The proposed estimation technique relies on the interaction between multiple measurement modes and a continuous observer. The state estimation is performed in a switching environment between multiple active sensors to exploit the available information as much as possible, especially in GPS-denied environments. A Luenberger observer-based projection is implemented as a continuous observer to optimize the estimation performance. The observer gain can be chosen by solving a Lyapunov equation by means of an LMI algorithm. Convergence is achieved through the LMI formulation, based on Lyapunov stability, which keeps the dynamic estimation error bounded by an appropriate choice of the observer gain matrix L. Simulation results are presented for a small fixed-wing UAV localization problem; the results obtained using the proposed approach are compared with those of a single-mode extended Kalman filter (EKF), demonstrating the viability of the proposed strategy. PMID:28420214
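For orientation, a continuous Luenberger observer and its error dynamics have the standard form

$$ \dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}), \qquad \dot{e} = (A - LC)\,e, $$

and a stabilising gain can be obtained from a Lyapunov-type LMI: find \(P \succ 0\) and \(Y\) such that \(A^{\mathsf{T}}P + PA - C^{\mathsf{T}}Y^{\mathsf{T}} - YC \prec 0\), then set \(L = P^{-1}Y\). This is the generic construction; the paper's exact LMI for the switching, multi-mode case is not reproduced in the abstract.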
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition, since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit is crucial for obtaining robust performance in the presence of outliers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmunds, D; Donovan, E
Purpose: To determine whether the Microsoft Kinect Version 2 (Kinect v2), a commercial off-the-shelf (COTS) depth sensor designed for entertainment purposes, is robust to the radiotherapy treatment environment and could be suitable for monitoring voluntary breath-hold compliance. This could complement current visual monitoring techniques and be useful for heart-sparing left breast radiotherapy. Methods: In-house software to control Kinect v2 sensors and capture output information was developed using the free Microsoft software development kit and the Cinder creative-coding C++ library. Each sensor was used with a 12 m USB 3.0 active cable. A solid water block was used as the object. The depth accuracy and precision of the sensors were evaluated by comparing the Kinect-reported distance to the object with a precision laser measurement across a distance range of 0.6 m to 2.0 m. The object was positioned on a high-precision programmable motion platform and moved in two programmed motion patterns, and the Kinect-reported distance was logged. Robustness to the radiation environment was tested by repeating all measurements with a linear accelerator operating over a range of pulse repetition frequencies (6 Hz to 400 Hz) and dose rates of 50 to 1500 monitor units (MU) per minute. Results: The complex, consistent relationship between true and measured distance was unaffected by the radiation environment, as was the ability to detect motion. Sensor precision was <1 mm and the accuracy between 1.3 mm and 1.8 mm when a distance correction was applied. Both motion patterns were tracked successfully with root mean squared errors (RMSE) of 1.4 and 1.1 mm, respectively. Conclusion: Kinect v2 sensors are capable of tracking pre-programmed motion patterns with an accuracy <2 mm and appear robust to the radiotherapy treatment environment. A clinical trial using the Kinect v2 sensor for monitoring voluntary breath hold has ethical approval and is open to recruitment. The authors are supported by a National Institute of Health Research (NIHR) Career Development Fellowship (CDF-2013-06-005). Microsoft Corporation donated three sensors. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation to meet certain phase margin requirements. Metrics-driven adaptive control is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control
NASA Astrophysics Data System (ADS)
Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.
2005-01-01
This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC to MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with a wide range of characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of errors in discrete-time implementation of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.
Robust synthetic biology design: stochastic game theory approach.
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-07-15
Synthetic biology aims to engineer artificial biological systems in order to investigate natural biological phenomena and to enable a variety of applications. However, the development of synthetic gene networks is still difficult, and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances from the extra-cellular environment on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is a central topic of synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed to an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi-Sugeno (T-S) fuzzy model is used to approximate the non-linear synthetic gene network, so that the design can be solved via the linear matrix inequality (LMI) technique using the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.
Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.
Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon
2014-10-01
In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees (ARTBLA). One-way ANOVA found significant differences (P<.05) among all systems for Iy, Ry, Lx, Ly, and twisting. Generally, vertical linear displacements were less likely to reach the threshold of clinical detectability compared with anteroposterior or mediolateral linear displacements. The overall repositioning accuracy of DENSCR was comparable with 4 magnetic mounting systems (DENMAG, ARTBLU, ARTWHI, ARTBLA). DENCON exhibited the worst repositioning accuracy for Iy, Ry, Lx, Ly, and twisting. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
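To make the linear-versus-kernel comparison concrete, a hedged sklearn sketch contrasting a ridge predictor (equivalent to GBLUP up to reparameterization) with an RBF kernel predictor on synthetic genotype data; the sizes, effect model, and hyperparameters are illustrative placeholders, not the study's layer-line data.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# toy stand-in for SNP genotypes coded 0/1/2 and an additive phenotype
X_train = rng.integers(0, 3, size=(1000, 500)).astype(float)
beta = rng.normal(0, 0.05, size=500)
y_train = X_train @ beta + rng.normal(0, 1.0, size=1000)
X_val = rng.integers(0, 3, size=(240, 500)).astype(float)
y_val = X_val @ beta + rng.normal(0, 1.0, size=240)

# linear ridge regression vs its non-linear RBF-kernel counterpart
linear = Ridge(alpha=10.0).fit(X_train, y_train)
rbf = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3).fit(X_train, y_train)

# prediction accuracy as the correlation between observed and predicted
for name, model in [("linear", linear), ("rbf", rbf)]:
    acc = np.corrcoef(y_val, model.predict(X_val))[0, 1]
    print(f"{name} prediction accuracy (r): {acc:.3f}")
```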
Robust Stabilization of Uncertain Systems Based on Energy Dissipation Concepts
NASA Technical Reports Server (NTRS)
Gupta, Sandeep
1996-01-01
Robust stability conditions obtained through generalization of the notion of energy dissipation in physical systems are discussed in this report. Linear time-invariant (LTI) systems which dissipate energy corresponding to quadratic power functions are characterized in the time-domain and the frequency-domain, in terms of linear matrix inequalities (LMIs) and algebraic Riccati equations (AREs). A novel characterization of strictly dissipative LTI systems is introduced in this report. Sufficient conditions in terms of dissipativity and strict dissipativity are presented for (1) stability of the feedback interconnection of dissipative LTI systems, (2) stability of dissipative LTI systems with memoryless feedback nonlinearities, and (3) quadratic stability of uncertain linear systems. It is demonstrated that the framework of dissipative LTI systems investigated in this report unifies and extends small gain, passivity, and sector conditions for stability. Techniques for selecting power functions for characterization of uncertain plants and robust controller synthesis based on these stability results are introduced. A spring-mass-damper example is used to illustrate the application of these methods for robust controller synthesis.
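As a sketch of the LMI characterization, the following checks passivity (dissipativity with respect to the supply rate u·y) of a spring-mass-damper via the positive-real LMI; it assumes cvxpy with an SDP-capable solver is available, and the numerical values are illustrative, not from the report.

```python
import numpy as np
import cvxpy as cp

# spring-mass-damper: m*x'' + c*x' + k*x = u, output y = x' (velocity)
m_, c_, k_ = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-k_ / m_, -c_ / m_]])
B = np.array([[0.0], [1.0 / m_]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

# positive-real (KYP) LMI: the system is passive iff some P > 0 makes
#   [[A'P + PA, PB - C'], [B'P - C, -(D + D')]]  negative semidefinite
P = cp.Variable((2, 2), symmetric=True)
M = cp.bmat([[A.T @ P + P @ A, P @ B - C.T],
             [B.T @ P - C, -(D + D.T)]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(2), M << 0])
prob.solve()
print("passive:", prob.status == cp.OPTIMAL)
print("storage-function matrix P =\n", P.value)
```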
Robust L1-norm two-dimensional linear discriminant analysis.
Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang
2015-05-01
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954
Deep Coupled Integration of CSAC and GNSS for Robust PNT.
Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi
2015-09-11
Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically obstructed environments such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. "Clock coasting" of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, which is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
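A minimal numpy sketch of the GNSS-corrected CSAC idea, assuming a two-state (bias, drift) clock model propagated by "clock coasting" and updated by noisy GNSS time; all noise figures are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # bias integrates drift
H = np.array([[1.0, 0.0]])              # GNSS observes bias only
Q = np.diag([1e-18, 1e-20])             # CSAC process noise (assumed)
R = np.array([[1e-15]])                 # GNSS time noise variance (assumed)

x = np.zeros(2)                         # estimated [bias, drift]
P = np.diag([1e-12, 1e-16])

rng = np.random.default_rng(1)
true_bias, true_drift = 0.0, 5e-11      # CSAC drifting at 50 ps/s

for _ in range(3600):                   # one hour at 1 Hz
    true_bias += true_drift * dt
    z = true_bias + rng.normal(0, np.sqrt(R[0, 0]))  # GNSS measurement
    # predict: clock coasting on the CSAC model
    x = F @ x
    P = F @ P @ F.T + Q
    # update: correct the coasted clock with stable-but-noisy GNSS time
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated drift: {x[1]:.2e} s/s (truth {true_drift:.1e})")
```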
Deep Coupled Integration of CSAC and GNSS for Robust PNT
Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi
2015-01-01
Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically obstructed environments such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. “Clock coasting” of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, which is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542
An embedded system for face classification in infrared video using sparse representation
NASA Astrophysics Data System (ADS)
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
2017-09-01
We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike in conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images is performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
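A hedged Python sketch of sparse-representation classification with a random binary sensing matrix, in the spirit of the pipeline above; sklearn's Lasso stands in for the paper's l1-minimization solver, and src_classify plus all sizes are our own illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train, labels, test, n_proj=128, seed=0):
    """Classify `test` by its sparsest representation in `train`,
    after CS-style random binary projection to n_proj dimensions."""
    rng = np.random.default_rng(seed)
    Phi = rng.integers(0, 2, size=(n_proj, train.shape[1])).astype(float)
    Phi[Phi == 0] = -1.0                      # precomputed +/-1 matrix
    A = Phi @ train.T                         # projected dictionary
    A /= np.linalg.norm(A, axis=0)            # unit-norm atoms
    y = Phi @ test
    y /= np.linalg.norm(y)
    # approximate l1-minimization: sparse coefficients over the dictionary
    coef = Lasso(alpha=1e-3, max_iter=10000).fit(A, y).coef_
    # assign to the class whose atoms best reconstruct the test image
    residuals = {c: np.linalg.norm(y - A[:, labels == c] @ coef[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
train = rng.normal(size=(40, 1024))           # 40 "faces", 4 classes
labels = np.repeat(np.arange(4), 10)
test = train[7] + 0.1 * rng.normal(size=1024) # noisy copy of a class-0 face
print(src_classify(train, labels, test))      # expect 0
```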
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro
2018-04-01
Sida tuberculata (ST) is a Malvaceae species widely distributed in Southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory and antimicrobial agent. Additionally, this species is chemically characterized mainly by flavonoids, alkaloids and phytoecdysteroids. The present work aimed to optimize the extractive technique and to validate a UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken Design (BBD) was used in method optimization. The extractive methods tested were: static and dynamic maceration, ultrasound, ultra-turrax and reflux. In the Box-Behnken design, three parameters were evaluated at three levels (-1, 0, +1): particle size, time and plant:solvent ratio. In the validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy and robustness were evaluated. The results indicate static maceration as the best technique for maximizing the 20HE peak area in ST extract. The optimal extraction from response surface methodology was achieved with a granulometry of 710 nm, 9 days of maceration and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA method showed full viability of performance, proving to be selective, linear, precise, accurate and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of the extractive method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.
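The calibration arithmetic behind several of these validation parameters can be sketched as follows; the concentrations and peak areas below are invented for illustration, and LOD/LOQ use the common ICH-style 3.3σ/slope and 10σ/slope rules, which may differ from the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# calibration of 20HE peak area vs concentration (illustrative numbers)
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # ug/mL (assumed)
area = np.array([1.02e4, 2.07e4, 4.00e4, 8.15e4, 1.61e5])

fit = stats.linregress(conc, area)
resid = area - (fit.intercept + fit.slope * conc)
sigma = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))   # residual SD

lod = 3.3 * sigma / fit.slope     # limit of detection
loq = 10.0 * sigma / fit.slope    # limit of quantification
print(f"r^2 = {fit.rvalue ** 2:.4f} (linearity)")
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```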
Optics measurement and correction for the Relativistic Heavy Ion Collider
NASA Astrophysics Data System (ADS)
Shen, Xiaozhe
The quality of beam optics is of great importance for the performance of a high energy accelerator like the Relativistic Heavy Ion Collider (RHIC). The turn-by-turn (TBT) beam position monitor (BPM) data can be used to derive beam optics. However, the accuracy of the derived beam optics is often limited by the performance and imperfections of instruments as well as by measurement methods and conditions. Therefore, a robust and model-independent data analysis method is highly desirable for extracting noise-free information from TBT BPM data. As a robust signal-processing technique, an independent component analysis (ICA) algorithm called second-order blind identification (SOBI) has proven particularly efficient in extracting physical beam signals from TBT BPM data even in the presence of instrument noise and errors. We applied the SOBI ICA algorithm at RHIC during the 2013 polarized proton operation to extract accurate linear optics from TBT BPM data of AC-dipole-driven coherent beam oscillations. From the same data, a first systematic estimation of RHIC BPM noise performance was also obtained with the SOBI ICA algorithm and showed good agreement with the RHIC BPM configurations. Based on the accurate linear optics measurement, a beta-beat response matrix correction method and a scheme using horizontal closed-orbit bumps at sextupoles for arc beta-beat correction were successfully applied to reach a record-low beam optics error at RHIC. This thesis presents the principles of the SOBI ICA algorithm and theory, as well as experimental results of optics measurement and correction at RHIC.
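A compact numpy sketch of the lagged-covariance idea behind SOBI; for brevity it implements the single-lag AMUSE variant (SOBI proper jointly diagonalizes covariances at several lags), and the mixing matrix and signals are synthetic stand-ins for TBT BPM data.

```python
import numpy as np

def amuse(X, lag=1):
    """Blind source separation by diagonalizing a time-lagged
    covariance of whitened data (AMUSE, a single-lag relative of SOBI).

    X: (n_bpms, n_turns) turn-by-turn readings, one row per BPM.
    Returns the unmixing matrix W and source signals S = W @ X.
    """
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]                  # zero-lag covariance
    d, E = np.linalg.eigh(C0)
    Wh = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix
    Z = Wh @ X
    C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C1 = 0.5 * (C1 + C1.T)                     # symmetrized lagged cov.
    _, U = np.linalg.eigh(C1)
    W = U.T @ Wh
    return W, W @ X

# toy "BPM" data: two narrowband sources mixed across three pickups;
# betatron motion would appear as such a sine/cosine source pair whose
# amplitudes across BPMs carry the beta function and phase advance
t = np.arange(2000)
src = np.vstack([np.sin(0.31 * t), np.sign(np.sin(0.05 * t))])
X = np.array([[1.0, 0.4], [0.6, 1.0], [0.2, 0.8]]) @ src
X = X + 0.01 * np.random.default_rng(0).normal(size=X.shape)
W, S = amuse(X)
```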
Adding flexibility to the search for robust portfolios in non-linear water resource planning
NASA Astrophysics Data System (ADS)
Tomlinson, James; Harou, Julien
2017-04-01
To date, robust optimisation of water supply systems has sought to find portfolios or strategies that are robust to a range of uncertainties or scenarios. The search for a single portfolio that is robust in all scenarios is necessarily suboptimal compared to a portfolio optimised for a single deterministic future. By contrast, establishing a separate portfolio for each future scenario is unhelpful to the planner, who must make a single decision today under deep uncertainty. In this work we show that a middle ground is possible by allowing a small number of different portfolios to be found that are each robust to a different subset of the global scenarios. We use evolutionary algorithms and a simple water resource system model to demonstrate this approach. The primary contribution is to demonstrate that flexibility can be added to the search for portfolios, in complex non-linear systems, at the expense of complete robustness across all future scenarios. In this context we define flexibility as the ability to design a portfolio in which some decisions are delayed, while those decisions that are not delayed are themselves shown to be robust to the future. We recognise that some decisions in our portfolio are more important than others. An adaptive portfolio is found by allowing no flexibility for these near-term "important" decisions, while maintaining flexibility in the remaining longer-term decisions. In this sense we create an effective 2-stage decision process for a non-linear water resource supply system. We show how this reduces a measure of regret versus the inflexible robust solution for the same system.
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
2011-01-01
Background Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at lower computational cost. Conclusions/Significance The present work provides the first empirical result on linear and RBF SVM in classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184
Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li
2011-02-16
Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at lower computational cost. The present work provides the first empirical result on linear and RBF SVM in classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
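A hedged sklearn sketch of the two recommended configurations, a linear SVM on a larger voxel set versus an RBF SVM on a small, PCA-reduced set; the synthetic data and all parameter choices are ours, not the study's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# toy stand-in for fMRI: 120 trials x 5000 voxels, 4 object categories
X = rng.normal(size=(120, 5000))
y = np.repeat(np.arange(4), 30)
X[:, :50] += 0.8 * y[:, None]          # a few informative voxels

# linear SVM with relatively more voxels vs RBF SVM with few voxels + PCA
linear_clf = make_pipeline(SelectKBest(f_classif, k=500),
                           SVC(kernel="linear", C=1.0))
rbf_clf = make_pipeline(SelectKBest(f_classif, k=100),
                        PCA(n_components=20),
                        SVC(kernel="rbf", C=1.0, gamma="scale"))

for name, clf in [("linear, 500 voxels", linear_clf),
                  ("RBF, 100 voxels + PCA", rbf_clf)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```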
Tian, Zhen; Yuan, Jingqi; Xu, Liang; Zhang, Xiang; Wang, Jingcheng
2018-05-25
As higher requirements are imposed on load regulation and efficiency enhancement, the control performance of boiler-turbine systems has become much more important. In this paper, a novel robust control approach is proposed to improve the coordinated control performance of subcritical boiler-turbine units. To capture the key features of the boiler-turbine system, a nonlinear control-oriented model is established and validated against historical operation data of a 300 MW unit. To achieve system linearization and decoupling, an adaptive feedback linearization strategy is proposed, which asymptotically eliminates the linearization error caused by model uncertainties. Based on the linearized boiler-turbine system, a second-order sliding mode controller is designed with the super-twisting algorithm. Moreover, the closed-loop system is proved robustly stable with respect to uncertainties and disturbances. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves excellent tracking performance, strong robustness and chattering reduction. Copyright © 2018. Published by Elsevier Ltd.
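A minimal simulation of the super-twisting algorithm on a scalar uncertain integrator, the kind of loop that remains after feedback linearization; the gains and disturbance are illustrative assumptions, with k2 chosen to dominate the disturbance's Lipschitz constant.

```python
import numpy as np

# super-twisting control of xdot = u + d(t), sliding variable s = x - ref
dt, T = 1e-3, 5.0
k1, k2 = 4.0, 5.0                  # must dominate |d'(t)| <= pi here
x, z = 1.0, 0.0                    # plant state and integral term
ref = 0.0

for step in range(int(T / dt)):
    t = step * dt
    s = x - ref
    # continuous sqrt term plus discontinuous integral term: second-order
    # sliding mode with reduced chattering compared to plain SMC
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + z
    z += -k2 * np.sign(s) * dt
    d = 0.5 * np.sin(2 * np.pi * t)  # matched disturbance (uncertainty)
    x += (u + d) * dt

print(f"|s| after {T:.0f}s: {abs(x - ref):.2e}")  # converges despite d(t)
```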
GEOSPATIAL DATA ACCURACY ASSESSMENT
The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...
Ranking and combining multiple predictors without labeled data
Parisi, Fabio; Strino, Francesco; Nadler, Boaz; Kluger, Yuval
2014-01-01
In a broad range of classification and decision-making problems, one is given the advice or predictions of several classifiers, of unknown reliability, over multiple questions or queries. This scenario is different from the standard supervised setting, where each classifier’s accuracy can be assessed using available labeled data, and raises two questions: Given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (i) reliably rank them and (ii) construct a metaclassifier more accurate than most classifiers in the ensemble? Here we present a spectral approach to address these questions. First, assuming conditional independence between classifiers, we show that the off-diagonal entries of their covariance matrix correspond to a rank-one matrix. Moreover, the classifiers can be ranked using the leading eigenvector of this covariance matrix, because its entries are proportional to their balanced accuracies. Second, via a linear approximation to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML), an unsupervised ensemble classifier whose weights are equal to these eigenvector entries. On both simulated and real data, SML typically achieves a higher accuracy than most classifiers in the ensemble and can provide a better starting point than majority voting for estimating the maximum likelihood solution. Furthermore, SML is robust to the presence of small malicious groups of classifiers designed to veer the ensemble prediction away from the (unknown) ground truth. PMID:24474744
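A short numpy sketch of the SML construction for binary predictions; the rank-one handling here (zeroing the covariance diagonal before the eigendecomposition) is a crude stand-in for the authors' estimator, so treat it as illustrative only.

```python
import numpy as np

def spectral_meta_learner(preds):
    """Unsupervised ensemble from predictions in {-1, +1}.

    preds: (n_classifiers, n_items). Under conditional independence the
    off-diagonal of the prediction covariance is rank one, and the
    leading eigenvector's entries are proportional to the classifiers'
    balanced accuracies; using them as vote weights gives the SML.
    """
    Q = np.cov(preds)
    np.fill_diagonal(Q, 0.0)          # keep only the off-diagonal structure
    w, V = np.linalg.eigh(Q)
    v = V[:, np.argmax(w)]
    v = v * np.sign(v.sum())          # fix the eigenvector sign ambiguity
    return np.sign(v @ preds)         # weighted, label-free ensemble vote

# demo: 10 classifiers of unknown reliability on 500 items
rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=500)
accs = rng.uniform(0.55, 0.8, size=10)
preds = np.array([np.where(rng.random(500) < a, truth, -truth)
                  for a in accs])
print("SML accuracy:", (spectral_meta_learner(preds) == truth).mean())
```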
Li, Hui; Liu, Liying; Lin, Zhili; Wang, Qiwei; Wang, Xiao; Feng, Lishuang
2018-01-22
A new double closed-loop control system with mean-square exponential stability is proposed for the first time to optimize the detection accuracy and dynamic response characteristics of the integrated optical resonance gyroscope (IORG). The influence mechanism of optical nonlinear effects on system detection sensitivity is investigated to optimize the demodulation gain, the maximum sensitivity and the linear working region of the gyro system. In particular, we analyze the effect of optical parameter fluctuation on the parameter uncertainty of the system, and investigate how laser frequency-locking noise influences the closed-loop detection accuracy of angular velocity. A stochastic disturbance model of the double closed-loop IORG is established that takes into consideration unfavorable factors such as optical nonlinearity, external disturbances, optical parameter fluctuation and unavoidable system noise. A robust control algorithm is also designed to guarantee the mean-square exponential stability of the system with a prescribed H ∞ performance, in order to improve the detection accuracy and dynamic performance of the IORG. The experimental results demonstrate that the IORG has a dynamic response time of less than 76 µs, a long-term bias stability of 7.04°/h with an integration time of 10 s over a one-hour test, and a corresponding bias stability of 1.841°/h based on the Allan deviation, which validates the effectiveness and usefulness of the proposed detection scheme.
Optimization and qualification of an Fc Array assay for assessments of antibodies against HIV-1/SIV.
Brown, Eric P; Weiner, Joshua A; Lin, Shu; Natarajan, Harini; Normandin, Erica; Barouch, Dan H; Alter, Galit; Sarzotti-Kelsoe, Marcella; Ackerman, Margaret E
2018-04-01
The Fc Array is a multiplexed assay that assesses the Fc domain characteristics of antigen-specific antibodies with the potential to evaluate up to 500 antigen specificities simultaneously. Antigen-specific antibodies are captured on antigen-conjugated beads and their functional capacity is probed via an array of Fc-binding proteins including antibody subclassing reagents, Fcγ receptors, complement proteins, and lectins. Here we present the results of the optimization and formal qualification of the Fc Array, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. Assay conditions were optimized for performance and reproducibility, and the final version of the assay was then evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to computing optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear programming model is presented using recent extensions in robust optimization theory. The decision variables are modeled, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
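A hedged PuLP sketch of the worst-case (min-max) counterpart on a toy two-scenario instance; supplier names, costs, capacities, and demands are invented, and the epigraph variable t carries the worst-case purchasing cost across scenarios.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, value)

suppliers = ["A", "B", "C"]
fixed = {"A": 100, "B": 80, "C": 120}      # contracting cost (assumed)
cap = {"A": 60, "B": 50, "C": 80}
# scenarios couple demand with exchange-rate-dependent unit prices
scen = {"s1": {"demand": 90, "price": {"A": 5, "B": 7, "C": 6}},
        "s2": {"demand": 120, "price": {"A": 8, "B": 6, "C": 9}}}

prob = LpProblem("robust_supplier_selection", LpMinimize)
y = {i: LpVariable(f"use_{i}", cat=LpBinary) for i in suppliers}
x = {(i, s): LpVariable(f"buy_{i}_{s}", lowBound=0)
     for i in suppliers for s in scen}
t = LpVariable("worst_case_cost")          # epigraph (min-max) variable

prob += lpSum(fixed[i] * y[i] for i in suppliers) + t
for s, d in scen.items():
    prob += t >= lpSum(d["price"][i] * x[i, s] for i in suppliers)
    prob += lpSum(x[i, s] for i in suppliers) >= d["demand"]
    for i in suppliers:
        prob += x[i, s] <= cap[i] * y[i]   # buy only from contracted

prob.solve()
print({i: int(value(y[i])) for i in suppliers}, value(t))
```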
Gap-filling methods to impute eddy covariance flux data by preserving variance.
NASA Astrophysics Data System (ADS)
Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.
2015-12-01
To represent carbon dynamics in terms of the exchange of CO2 between the terrestrial ecosystem and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, measurements from EC data are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to obtain estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied by the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time, which mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used as our base the Michaelis-Menten and Van't Hoff functions. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window with additional micrometeorological predictors (as indicated by a lower root mean square error (RMSE) of the predicted values of NEE). Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporation of the potential driver variables of the parameters traditionally used. Then, predicted values from these methods and from 'traditional' gap-filling methods (using 12 fixed monthly windows) will be compared to assess the extent to which variance is preserved. Further, this method will be applied to impute artificially created gaps, to analyze whether variance is preserved.
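A minimal scipy sketch of the window-wise Michaelis-Menten (light-response) fit used for gap filling; the synthetic PAR/NEE data are placeholders, and the variance-preserving refinement (predicting parameters from micrometeorology such as VPD and VWC) is omitted for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_light_response(par, alpha, pmax, rd):
    """Michaelis-Menten (rectangular hyperbola) NEE model: CO2 uptake
    saturates with light (PAR), offset by ecosystem respiration rd."""
    return -(alpha * par * pmax) / (alpha * par + pmax) + rd

# synthetic half-hourly data for one 30-day moving window (illustrative)
rng = np.random.default_rng(2)
par = rng.uniform(0, 2000, size=1440)
nee = mm_light_response(par, 0.03, 25.0, 4.0) + rng.normal(0, 1.5, 1440)
gaps = rng.random(1440) < 0.2            # 20% of records missing

# fit on observed records in the window, then impute the gaps
popt, _ = curve_fit(mm_light_response, par[~gaps], nee[~gaps],
                    p0=(0.01, 20.0, 2.0), maxfev=10000)
nee_filled = nee.copy()
nee_filled[gaps] = mm_light_response(par[gaps], *popt)
print("fitted (alpha, Pmax, Rd):", np.round(popt, 3))
```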
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
Robust nonlinear control of vectored thrust aircraft
NASA Technical Reports Server (NTRS)
Doyle, John C.; Murray, Richard; Morris, John
1993-01-01
An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent new results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFTs), Linear Matrix Inequalities (LMIs), and the structured singular value μ. Most of these new advances have been accomplished by the Caltech controls group independently or in collaboration with researchers in other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control, but what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and in methods for systems with constraints and saturations.
NASA Astrophysics Data System (ADS)
Toro, E. F.; Titarev, V. A.
2005-01-01
In this paper we develop non-linear ADER schemes for time-dependent scalar linear and non-linear conservation laws in one-, two- and three-space dimensions. Numerical results of schemes of up to fifth order of accuracy in both time and space illustrate that the designed order of accuracy is achieved in all space dimensions for a fixed Courant number and essentially non-oscillatory results are obtained for solutions with discontinuities. We also present preliminary results for two-dimensional non-linear systems.
Exploration of robust operating conditions in inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Tromp, John W.; Pomares, Mario; Alvarez-Prieto, Manuel; Cole, Amanda; Ying, Hai; Salin, Eric D.
2003-11-01
'Robust' conditions, as defined by Mermet and co-workers for inductively coupled plasma (ICP)-atomic emission spectrometry, minimize matrix effects on analyte signals, and are obtained by increasing power and reducing nebulizer gas flow. In ICP-mass spectrometry (MS), it is known that reduced nebulizer gas flow usually leads to more robust conditions such that matrix effects are reduced. In this work, robust conditions for ICP-MS have been determined by optimizing for accuracy in the determination of analytes in a multi-element solution with various interferents (Al, Ba, Cs, K, Na), by varying power, nebulizer gas flow, sample introduction rate and ion lens voltage. The goal of the work was to determine which operating parameters were the most important in reducing matrix effects, and whether different interferents yielded the same robust conditions. Reduction in nebulizer gas flow and in sample input rate led to a significantly decreased interference, while an increase in power seemed to have a lesser effect. Once the other parameters had been adjusted to their robust values, there was no additional improvement in accuracy attainable by adjusting the ion lens voltage. The robust conditions were universal, since, for all the interferents and analytes studied, the optimum was found at the same operating conditions. One drawback to the use of robust conditions was the slightly reduced sensitivity; however, in the context of 'intelligent' instruments, the concept of 'robust conditions' is useful in many cases.
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
To improve the accuracy and robustness to noise in water-fat separation by unifying the multiscale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Technical Reports Server (NTRS)
Patel, R. V.; Toda, M.; Sridhar, B.
1977-01-01
The paper deals with the problem of expressing the robustness (stability) property of a linear quadratic state feedback (LQSF) design quantitatively in terms of bounds on the perturbations (modeling errors or parameter variations) in the system matrices so that the closed-loop system remains stable. Nonlinear time-varying and linear time-invariant perturbations are considered. The only computation required in obtaining a measure of the robustness of an LQSF design is to determine the eigenvalues of two symmetric matrices determined when solving the algebraic Riccati equation corresponding to the LQSF design problem. Results are applied to a complex dynamic system consisting of the flare control of a STOL aircraft. The design of the flare control is formulated as an LQSF tracking problem.
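A short scipy sketch of this computation under a commonly quoted bound of the Patel-Toda family, ||dA||_2 < lambda_min(Q) / (2 * lambda_max(P)); the plant is a toy example of our own, and the exact bound derived in the paper may be sharper or stated differently.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQSF design for a toy 2-state plant; only the eigenvalues of the two
# symmetric matrices P (ARE solution) and Q are needed for the bound
A = np.array([[0.0, 1.0], [-1.0, -0.4]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # LQ state-feedback gain
bound = np.linalg.eigvalsh(Q).min() / (2 * np.linalg.eigvalsh(P).max())
print("gain K =", K.round(3))
print("closed loop stays stable for ||dA||_2 <", round(bound, 4))
```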
NASA Astrophysics Data System (ADS)
Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram
2018-05-01
The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on the Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features). However, linear features, due to their unique characteristics, are of particular interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. Ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye image and an Ikonos image over different areas of Iran are used. Comparing the results obtained by the proposed method in a traditional RFM with those of conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (to pixel-level accuracy), as well as the strength of the proposed method. To prevent over-parametrization error in a traditional RFM due to the selection of a large number of improperly correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of combining the proposed OWDIS method with an optimized line-based RFM in terms of accuracy (better than 0.7 pixel), reliability, and reduced systematic errors. These results also demonstrate the high potential of linear features as reliable control features for reaching sub-pixel accuracy in registration applications.
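A compact GA sketch of the OWDIS selection loop with binary chromosomes; the fitness function here is a synthetic stand-in (in the real procedure it solves the line-based RFM with the selected GCLs and evaluates checkpoint RMSE), and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lines, pop_size, n_gen = 40, 30, 60
quality = rng.random(n_lines)          # hidden per-line usefulness (fake)

def fitness(mask):
    """Stand-in for OWDIS fitness: lower checkpoint RMSE is better."""
    if mask.sum() < 8:                 # too few lines to solve the model
        return 1e6
    # fewer, more useful lines -> lower simulated RMSE
    return 0.02 * mask.sum() + 1.0 / (1e-3 + quality[mask].sum())

pop = rng.random((pop_size, n_lines)) < 0.5   # ones = selected GCLs
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)]      # elitist sort, best half survives
    children = []
    while len(children) < pop_size - pop_size // 2:
        a, b = pop[rng.integers(0, pop_size // 2, size=2)]
        cut = rng.integers(1, n_lines)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_lines) < 0.02         # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop[pop_size // 2:] = children

scores = np.array([fitness(ind) for ind in pop])
best = pop[np.argmin(scores)]
print("selected GCLs:", np.flatnonzero(best), "RMSE:", scores.min())
```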
Dissipative rendering and neural network control system design
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.
1995-01-01
Model-based control system designs are limited by the accuracy of the models of the plant, plant uncertainty, and exogenous signals. Although better models can be obtained with system identification, the models and control designs still have limitations. One approach to reduce the dependency on particular models is to design a set of compensators that will guarantee robust stability for a set of plants. Optimization over the compensator parameters can then be used to obtain the desired performance. The conservativeness of this approach can be reduced by integrating fundamental properties of the plant models. This is the approach of dissipative control design. Dissipative control designs are based on several variations of the Passivity Theorem, which have been proven for nonlinear/linear and continuous-time/discrete-time systems. These theorems depend not on a specific model of a plant, but on its general dissipative properties. Dissipative control design has found wide applicability in flexible space structures and robotic systems that can be configured to be dissipative. Currently, there is ongoing research to improve the performance of dissipative control designs. For aircraft systems that are not dissipative, active control may be used to render them dissipative, after which a dissipative control design technique can be applied. It is also possible that rendering a system dissipative and dissipative control design may be combined into one step. Furthermore, the transformation of a non-dissipative system into a dissipative one can be done robustly. One sequential design procedure for finite-dimensional linear time-invariant systems has been developed. For nonlinear plants that cannot be controlled adequately with a single linear controller, model-based techniques have additional problems. Nonlinear system identification is still a research topic. Lacking analytical models for model-based design, artificial neural network algorithms have recently received considerable attention. Using their universal approximation property, neural networks have been introduced into nonlinear control designs in several ways. Unfortunately, little work has appeared that analyzes neural network control systems and establishes margins for stability and performance. One approach to this analysis is to set up neural network control systems in the framework presented above. For example, one neural network could be used to render a system dissipative, and a second, strictly dissipative neural network controller could then be used to guarantee robust stability.
The Influence of Delaying Judgments of Learning on Metacognitive Accuracy: A Meta-Analytic Review
ERIC Educational Resources Information Center
Rhodes, Matthew G.; Tauber, Sarah K.
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the…
ERIC Educational Resources Information Center
Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.
2009-01-01
The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a consolidated method, widespread in applications like UAV navigation. In harsh environments, where GNSS could be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the test area. A position accuracy analysis is performed and the effectiveness of the proposed robust method is validated.
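A hedged scipy sketch of robust estimation in this spirit: recovering a position from ranges to known 3D landmarks with a Huber loss, so a single blunder does not corrupt the solution; the geometry and noise levels are invented, and the paper's actual measurement model (image observations against a 3D map) is more involved.

```python
import numpy as np
from scipy.optimize import least_squares

landmarks = np.array([[0, 0, 10.], [30, 0, 12.], [0, 40, 9.],
                      [25, 35, 15.], [10, 20, 11.]])
true_pos = np.array([12.0, 18.0, 1.5])

rng = np.random.default_rng(4)
ranges = np.linalg.norm(landmarks - true_pos, axis=1)
ranges += rng.normal(0, 0.05, size=len(ranges))   # measurement noise
ranges[2] += 8.0                                  # one gross blunder

def residuals(p):
    return np.linalg.norm(landmarks - p, axis=1) - ranges

plain = least_squares(residuals, x0=np.zeros(3))
robust = least_squares(residuals, x0=np.zeros(3),
                       loss="huber", f_scale=0.3)  # down-weight blunders
print("LS error:    ", np.linalg.norm(plain.x - true_pos))
print("Huber error: ", np.linalg.norm(robust.x - true_pos))
```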
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard for high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations, explicitly highlighting assumptions as they are made, and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential for maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Yao, Mingyin; Yang, Hui; Huang, Lin; Chen, Tianbing; Rao, Gangfu; Liu, Muhua
2017-05-10
In seeking a novel green-analysis method for monitoring toxic heavy metal residues in fresh leafy vegetables, laser-induced breakdown spectroscopy (LIBS) was applied to assess its capability for this task. The spectra of fresh vegetable samples contaminated in the laboratory were collected with an optimized LIBS experimental setup, and the reference concentrations of cadmium (Cd) in the samples were obtained by conventional atomic absorption spectroscopy after wet digestion. Direct calibration relating the intensity of a single Cd line to Cd concentration exposed the weakness of this calibration method. The accuracy of linear calibration improved slightly when three Cd lines were used as characteristic variables, especially after spectral pretreatment, but remained insufficient for predicting Cd in samples. Therefore, partial least-squares regression (PLSR) was utilized to enhance the robustness of the quantitative analysis. The results of the PLSR model showed that the prediction accuracy for the Cd target meets the requirements for food-safety determination. This investigation shows that LIBS is a promising and emerging method for analyzing toxic constituents in agricultural products, especially when combined with suitable chemometrics.
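An illustrative sklearn PLSR sketch on synthetic "spectra" with a few Cd-like channels; channel indices, sample sizes, and concentrations are placeholders of our own, not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
# toy LIBS stand-in: 60 samples x 2000 channels, three "Cd lines" whose
# intensities scale with concentration on top of matrix noise
conc = rng.uniform(0, 50, size=60)              # mg/kg (illustrative)
spectra = rng.normal(0, 1.0, size=(60, 2000))
for ch in (214, 226, 228):                      # pseudo Cd channels
    spectra[:, ch] += 0.15 * conc

# multivariate calibration: PLSR pools the correlated lines, which is
# what makes it more robust than single-line linear calibration
pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, conc, cv=5).ravel()
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
rmse = np.sqrt(np.mean((conc - pred) ** 2))
print(f"cross-validated R^2 = {r2:.3f}, RMSE = {rmse:.2f} mg/kg")
```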
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.
A new weak Galerkin finite element method for elliptic interface problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.
Mawson, Deborah H; Jeffrey, Keon L; Teale, Philip; Grace, Philip B
2018-06-19
A rapid, accurate and robust method for the determination of catechin (C), epicatechin (EC), gallocatechin (GC), epigallocatechin (EGC), catechin gallate (Cg), epicatechin gallate (ECg), gallocatechin gallate (GCg) and epigallocatechin gallate (EGCg) concentrations in human plasma has been developed. The method utilises protein precipitation following enzyme hydrolysis, with chromatographic separation and detection using reversed-phase liquid chromatography - tandem mass spectrometry (LC-MS/MS). Traditional issues such as lengthy chromatographic run times, sample and extract stability, and lack of suitable internal standards have been addressed. The method has been evaluated using a comprehensive validation procedure, confirming linearity over appropriate concentration ranges, and inter/intra batch precision and accuracies within suitable thresholds (precisions within 13.8% and accuracies within 12.4%). Recoveries of analytes were found to be consistent between different matrix samples, compensated for using suitable internal markers and within the performance of the instrumentation used. Similarly, chromatographic interferences have been corrected using the internal markers selected. Stability of all analytes in matrix is demonstrated over 32 days and throughout extraction conditions. This method is suitable for high throughput sample analysis studies.
Wang, Shunhai; Bobst, Cedric E.; Kaltashov, Igor A.
2018-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein which is viewed as a promising drug carrier to target the central nervous system due to its ability to penetrate the blood-brain barrier (BBB). Among the many challenges during the development of Tf-based therapeutics, sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult due to the presence of abundant endogenous Tf. Herein, we describe the development of a new LC-MS based method for sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous hTf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed O18-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation. PMID:26307718
Four decades of implicit Monte Carlo
Wollaber, Allan B.
2016-02-23
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations, explicitly highlighting assumptions as they are made, and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
Genomic selection in sugar beet breeding populations.
Würschum, Tobias; Reif, Jochen C; Kraft, Thomas; Janssen, Geert; Zhao, Yusheng
2013-09-18
Genomic selection exploits dense genome-wide marker data to predict breeding values. In this study we used a large sugar beet population of 924 lines representing different germplasm types present in breeding populations: unselected segregating families and diverse lines from more advanced stages of selection. All lines have been intensively phenotyped in multi-location field trials for six agronomically important traits and genotyped with 677 SNP markers. We used ridge regression best linear unbiased prediction in combination with fivefold cross-validation and obtained high prediction accuracies for all except one trait. In addition, we investigated whether a calibration developed based on a training population composed of diverse lines is suited to predict the phenotypic performance within families. Our results show that the prediction accuracy is lower than that obtained within the diverse set of lines, but comparable to that obtained by cross-validation within the respective families. The results presented in this study suggest that a training population derived from intensively phenotyped and genotyped diverse lines from a breeding program does hold potential to build up robust calibration models for genomic selection. Taken together, our results indicate that genomic selection is a valuable tool and can thus complement the genomics toolbox in sugar beet breeding.
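As a rough illustration, ridge regression is the penalized-regression counterpart of the RR-BLUP used here, and fivefold cross-validation of the kind described looks like the sketch below; the dimensions mirror the study, but the data and penalty are placeholder assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical stand-ins: 924 lines x 677 SNP markers, one trait vector.
M = np.random.randint(0, 3, size=(924, 677)).astype(float)  # genotypes coded 0/1/2
y = np.random.rand(924)                                      # phenotypic trait values

# Ridge regression is the penalized-regression analogue of RR-BLUP; in practice
# the penalty would be tied to the ratio of residual to marker variance.
model = Ridge(alpha=100.0)
scores = cross_val_score(model, M, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("fivefold CV score (R^2; GS studies often report correlation):", scores.mean())
```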
Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Brown, R. L.
1978-01-01
Two FORTRAN programs that use an interactive graphics terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex-variable solutions, and generates plots showing the accuracy and error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog of the stability region for classes of non-linear ODEs, as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
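The linear stability region such programs plot can be traced with the classical boundary-locus method: points hλ on the boundary satisfy ρ(z) − hλ σ(z) = 0 for |z| = 1. Below is a minimal modern sketch, using the two-step Adams-Bashforth method as an example (this is an illustration of the concept, not one of the report's FORTRAN programs).

```python
import numpy as np
import matplotlib.pyplot as plt

# Boundary-locus method: sweep z = exp(i*theta) around the unit circle and
# plot h*lambda = rho(z) / sigma(z).
theta = np.linspace(0.0, 2.0 * np.pi, 400)
z = np.exp(1j * theta)

# Two-step Adams-Bashforth: rho(z) = z^2 - z, sigma(z) = (3z - 1)/2.
rho = z**2 - z
sigma = 1.5 * z - 0.5
hlam = rho / sigma

plt.plot(hlam.real, hlam.imag)
plt.xlabel("Re(h*lambda)"); plt.ylabel("Im(h*lambda)")
plt.title("Stability boundary, 2-step Adams-Bashforth")
plt.axis("equal"); plt.grid(True); plt.show()
```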
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to multiple low-contrast cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including the widely used ANTs registration tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models, the package provides a generic skill score and a percent-better measure. Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
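Two of these metrics can be computed directly from their published definitions, as in the sketch below; this is an independent illustration of the formulas, not the PyForecastTools API.

```python
import numpy as np

def median_symmetric_accuracy(obs, pred):
    """Median symmetric accuracy (percent): 100*(exp(median(|ln(pred/obs)|)) - 1)."""
    q = np.log(np.asarray(pred) / np.asarray(obs))
    return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

def mean_absolute_scaled_error(obs, pred):
    """MAE scaled by the in-sample MAE of the naive persistence forecast."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    scale = np.mean(np.abs(obs[1:] - obs[:-1]))   # persistence-forecast error
    return np.mean(np.abs(pred - obs)) / scale

obs = np.array([1.0, 2.0, 4.0, 3.0])
pred = np.array([1.1, 1.8, 4.4, 2.7])
print(median_symmetric_accuracy(obs, pred), mean_absolute_scaled_error(obs, pred))
```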
Talluri, Murali V N Kumar; Kalariya, Pradipbhai D; Dharavath, Shireesha; Shaikh, Naeem; Garg, Prabha; Ramisetti, Nageswara Rao; Ragampeta, Srinivas
2016-09-01
A novel ultra-high-performance liquid chromatography method development strategy was devised by applying a quality-by-design approach. The developed systematic approach was divided into five steps: (i) analytical target profile, (ii) critical quality attributes, (iii) risk assessment of critical parameters using design of experiments (screening and optimization phases), (iv) generation of the design space, and (v) process capability analysis (Cp) for the robustness study using Monte Carlo simulation. The complete quality-by-design-based method development was automated and expedited by employing a sub-2 μm particle column with an ultra-high-performance liquid chromatography system. Successful chromatographic separation of Coenzyme Q10 from its biotechnological process-related impurities was achieved on a Waters Acquity phenyl hexyl (100 mm × 2.1 mm, 1.7 μm) column with gradient elution of 10 mM ammonium acetate buffer (pH 4.0) and a mixture of acetonitrile/2-propanol (1:1) as the mobile phase. Through this study, a fast and organized method development workflow was established, and the robustness of the method was demonstrated. The method was validated for specificity, linearity, accuracy, precision, and robustness in compliance with the International Conference on Harmonization Q2(R1) guidelines. The impurities were identified by atmospheric pressure chemical ionization mass spectrometry. Further, the in silico toxicity of the impurities was analyzed using TOPKAT and DEREK software.
Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj
2012-03-20
Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
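A minimal sketch of such a nonlinear SVM classifier applied to spectra, with placeholder data and illustrative hyperparameters (the authors' actual kernel settings are not reproduced here):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: LIBS spectra with integer class labels per drug type.
X = np.random.rand(120, 1024)          # placeholder spectra
y = np.random.randint(0, 4, size=120)  # placeholder pharmaceutical classes

# An RBF-kernel SVM can capture the nonlinear ablation/self-absorption
# variability that linear classifiers such as PLS-DA or SIMCA model poorly.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```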
Bourjaily, Mark A.
2012-01-01
Animals must often make opposing responses to similar complex stimuli. Multiple sensory inputs from such stimuli combine to produce stimulus-specific patterns of neural activity. It is the differences between these activity patterns, even when small, that provide the basis for any differences in behavioral response. In the present study, we investigate three tasks with differing degrees of overlap in the inputs, each with just two response possibilities. We simulate behavioral output via winner-takes-all activity in one of two pools of neurons forming a biologically based decision-making layer. The decision-making layer receives inputs either in a direct stimulus-dependent manner or via an intervening recurrent network of neurons that form the associative layer, whose activity helps distinguish the stimuli of each task. We show that synaptic facilitation of synapses to the decision-making layer improves performance in these tasks, robustly increasing accuracy and speed of responses across multiple configurations of network inputs. Conversely, we find that synaptic depression worsens performance. In a linearly nonseparable task with exclusive-or logic, the benefit of synaptic facilitation lies in its superlinear transmission: effective synaptic strength increases with presynaptic firing rate, which enhances the already present superlinearity of presynaptic firing rate as a function of stimulus-dependent input. In linearly separable single-stimulus discrimination tasks, we find that facilitating synapses are always beneficial because synaptic facilitation always enhances any differences between inputs. Thus we predict that for optimal decision-making accuracy and speed, synapses from sensory or associative areas to decision-making or premotor areas should be facilitating. PMID:22457467
Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry
2013-01-01
The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions are generated from the common equations. These simulated data explicitly contain the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H
2016-02-15
This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and the reference method.
NASA Astrophysics Data System (ADS)
Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.
2016-02-01
This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.
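As an illustration of the linearity and limit calculations in such a validation, the sketch below fits a calibration line and applies the standard ICH Q2(R1) formulas; the absorbance values are invented for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical calibration data for the p-CA complex at 524 nm.
conc = np.array([24, 48, 72, 96, 120, 144], dtype=float)      # ug/mL
absorb = np.array([0.121, 0.246, 0.372, 0.489, 0.615, 0.738])  # absorbance units

slope, intercept = np.polyfit(conc, absorb, 1)   # linear calibration fit
resid = absorb - (slope * conc + intercept)
sigma = resid.std(ddof=2)                        # residual standard deviation

lod = 3.3 * sigma / slope                        # ICH Q2(R1) detection limit
loq = 10.0 * sigma / slope                       # ICH Q2(R1) quantification limit
r = np.corrcoef(conc, absorb)[0, 1]              # linearity check
print(f"slope={slope:.5f}, r={r:.4f}, LOD={lod:.2f}, LOQ={loq:.2f} ug/mL")
```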
Robust Weak Chimeras in Oscillator Networks with Delayed Linear and Quadratic Interactions
NASA Astrophysics Data System (ADS)
Bick, Christian; Sebek, Michael; Kiss, István Z.
2017-10-01
We present an approach to generate chimera dynamics (localized frequency synchrony) in oscillator networks with two populations of (at least) two elements using a general method based on a delayed interaction with linear and quadratic terms. The coupling design yields robust chimeras through a phase-model-based design of the delay and the ratio of linear and quadratic components of the interactions. We demonstrate the method in the Brusselator model and experiments with electrochemical oscillators. The technique opens the way to directly bridge chimera dynamics in phase models and real-world oscillator networks.
A secure distributed logistic regression protocol for the detection of rare adverse drug events
El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat
2013-01-01
Background: There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. Objective: To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. Methods: We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. Results: The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. Conclusion: The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models. PMID:22871397
A secure distributed logistic regression protocol for the detection of rare adverse drug events.
El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat
2013-05-01
There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models.
Usuda, Takashi; Kobayashi, Naoki; Takeda, Sunao; Kotake, Yoshifumi
2010-01-01
We have developed a non-invasive blood pressure monitor that can measure blood pressure quickly and robustly. The monitor combines two measurement modes: linear inflation and linear deflation. In inflation mode, faster measurement is achieved through a rapid inflation rate. In deflation mode, robust noise reduction is achieved. When there is neither noise nor arrhythmia, the inflation mode provides precise, quick and comfortable measurement. If the inflation mode fails to calculate an appropriate blood pressure due to body movement or arrhythmia, the monitor switches automatically to the deflation mode and measures blood pressure using digital signal processing techniques such as wavelet analysis, filter banks, and filtering combined with the FFT and inverse FFT. The inflation mode succeeded in 2440 of 3099 measurements (79%) in an operating room and a rehabilitation room. The newly designed blood pressure monitor provides the fastest measurement for patients with normal circulation and robust measurement for patients with body movement or severe arrhythmia. This fast measurement method also improves patient comfort.
Stochastic Integration H∞ Filter for Rapid Transfer Alignment of INS.
Zhou, Dapeng; Guo, Lei
2017-11-18
The performance of an inertial navigation system (INS) operated on a moving base greatly depends on the accuracy of rapid transfer alignment (RTA). However, in practice, the coexistence of large initial attitude errors and uncertain observation noise statistics poses a great challenge for the estimation accuracy of misalignment angles. This study aims to develop a novel robust nonlinear filter, namely the stochastic integration H∞ filter (SIH∞F), for improving both the accuracy and robustness of RTA. In this new nonlinear H∞ filter, the stochastic spherical-radial integration rule is incorporated within the framework of the derivative-free H∞ filter for the first time, and the resulting SIH∞F simultaneously attenuates the negative effects in estimation caused by significant nonlinearity and large uncertainty. Comparisons between the SIH∞F and previously well-known methodologies are carried out by means of numerical simulation and a van test. The results demonstrate that the newly proposed method outperforms the cubature H∞ filter. Moreover, the SIH∞F inherits the benefits of the traditional stochastic integration filter, but with more robustness in the presence of uncertainty.
Hybrid Upwind Splitting (HUS) by a Field-by-Field Decomposition
NASA Technical Reports Server (NTRS)
Coquel, Frederic; Liou, Meng-Sing
1995-01-01
We introduce and develop a new approach for upwind biasing: the hybrid upwind splitting (HUS) method. This original procedure is based on a suitable hybridization of current prominent flux vector splitting (FVS) and flux difference splitting (FDS) methods. The HUS method is designed to naturally combine the respective strengths of the above methods while excluding their main deficiencies. Specifically, the HUS strategy yields a family of upwind methods that exhibit the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the resolution of linear waves. We give a detailed construction of the HUS methods following a general and systematic procedure performed directly at the basic level of the field-by-field (i.e., wave) decomposition involved in FDS methods. For a given decomposition, each field is endowed either with FVS or FDS numerical fluxes, depending on the nonlinear nature of the field under consideration. Such a design principle is made possible by the introduction of a convenient formalism that provides a unified framework for upwind methods. The HUS methods we propose bring significant improvements over current methods in terms of accuracy and robustness: they yield entropy-satisfying approximate solutions, as strongly supported by numerical experiments. Field-by-field hybrid numerical fluxes also achieve fairly simple and explicit expressions and hence require a computational effort between that of FVS and FDS. Several numerical experiments, ranging from stiff 1D shock-tube problems to high-speed viscous flows, are presented to illustrate the benefits of the present approach. We assess in particular the relevance of our HUS schemes to viscous flow calculations.
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Brain activity is monitored through the electroencephalogram (EEG) of patients in order to detect epileptic seizures. The performance of EEG-based seizure detection depends on the feature extraction strategy. In this research, we extracted features using a variety of strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and several statistical features. A deeper study was then undertaken using machine learning classifiers tuned over multiple factors: support vector machine (SVM) kernels were evaluated over multiclass kernel types and box constraint levels; for K-nearest neighbors (KNN), we varied the distance metric, neighbor weights, and number of neighbors; decision trees were tuned over the maximum number of splits and the split criterion; and ensemble classifiers were evaluated over different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy and AUC. The SVM with a linear kernel and KNN with the city-block distance metric gave the overall highest accuracy of 99.5%, higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, KNN with inverse-squared distance weighting gave higher performance across different neighbor counts. Finally, in distinguishing postictal heart rate oscillations from those of epileptic ictal subjects, a highest performance of 100% was obtained using different machine learning classifiers.
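The two best-performing configurations named above translate to scikit-learn roughly as follows. The feature matrix is a placeholder, and note that sklearn's built-in "distance" weighting is inverse distance; the inverse-squared variant mentioned in the abstract would need a custom weight function.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: one row of extracted EEG features per epoch.
X = np.random.rand(500, 30)            # placeholder feature matrix
y = np.random.randint(0, 2, size=500)  # 0 = interictal, 1 = ictal

svm = SVC(kernel="linear", C=1.0)                  # linear-kernel SVM
knn = KNeighborsClassifier(n_neighbors=5,
                           metric="cityblock",     # Manhattan / city-block distance
                           weights="distance")     # inverse-distance weighting
for name, clf in [("SVM-linear", svm), ("KNN-cityblock", knn)]:
    print(name, cross_val_score(clf, X, y, cv=10).mean())  # tenfold CV as in the study
```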
Popular song and lyrics synchronization and its application to music information retrieval
NASA Astrophysics Data System (ADS)
Chen, Kai; Gao, Sheng; Zhu, Yongwei; Sun, Qibin
2006-01-01
An automatic synchronization system for popular songs and their lyrics is presented in this paper. The system includes two main components: (a) automatic detection of vocal/non-vocal segments in the audio signal, and (b) automatic alignment of the acoustic signal of the song with its lyrics using speech recognition techniques, positioning the boundaries of the lyrics in the acoustic realization at multiple levels simultaneously (e.g., the word/syllable level and the phrase level). GMM models and a set of HMM-based acoustic model units are carefully designed and trained for the detection and alignment. To eliminate the severe mismatch due to the diversity of musical signals and the sparse training data available, an unsupervised adaptation technique, maximum likelihood linear regression (MLLR), is exploited to tailor the models to the real environment, which improves the robustness of the synchronization system. To further reduce the effect of missed non-vocal music on alignment, a novel grammar net is built to direct the alignment. To our knowledge, this is the first automatic synchronization system based only on low-level acoustic features such as MFCCs. We evaluate the system on a Chinese song dataset collected from 3 popular singers. We obtain 76.1% for the boundary accuracy at the syllable level (BAS) and 81.5% for the boundary accuracy at the phrase level (BAP) using fully automatic vocal/non-vocal detection and alignment. The synchronization system has many applications, such as multi-modality (audio and textual) content-based popular song browsing and retrieval. Through this study, we would like to open up discussion of some challenging problems in developing a robust synchronization system for large-scale databases.
Synthesis Methods for Robust Passification and Control
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)
2000-01-01
The research effort under this cooperative agreement has been essentially a continuation of the work from previous grants. The ongoing work has primarily focused on developing passivity-based control techniques for Linear Time-Invariant (LTI) systems. During this period, significant progress has been made in the area of passivity-based control of LTI systems, and some preliminary results have also been obtained for nonlinear systems as well. The prior work addressed optimal control design for inherently passive as well as non-passive linear systems. For exploiting the robustness characteristics of passivity-based controllers, a passification methodology was developed for LTI systems that are not inherently passive. Various methods of passification were first proposed and then further developed. The robustness of passification was addressed for multi-input multi-output (MIMO) systems for certain classes of uncertainties using frequency-domain methods. For MIMO systems, a state-space approach using a Linear Matrix Inequality (LMI)-based formulation was presented for passification of non-passive LTI systems. An LMI-based robust passification technique was presented for systems with redundant actuators and sensors; the redundancy in actuators and sensors was used effectively for robust passification using the LMI formulation. The passification was designed to be robust to interval-type uncertainties in system parameters. The passification techniques were used to design a robust controller for the Benchmark Active Control Technology wing under parametric uncertainties. The results on passive nonlinear systems, however, are very limited to date; our recent work in this area was presented, wherein some stability results were obtained for passive nonlinear systems that are affine in control.
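For reference, LMI-based passification of this kind rests on the positive-real lemma: a square LTI system (A, B, C, D) is passive if the following LMI in P is feasible. This is quoted as the standard textbook condition, not taken from the report itself.

```latex
\exists\, P = P^{\mathsf{T}} \succ 0 :\qquad
\begin{bmatrix}
A^{\mathsf{T}}P + PA & PB - C^{\mathsf{T}} \\
B^{\mathsf{T}}P - C & -(D + D^{\mathsf{T}})
\end{bmatrix} \preceq 0
```

Passification then amounts to choosing feedback (or sensor/actuator blending, in the redundant case) so that the closed-loop matrices admit such a P for all plants in the uncertainty set.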
NASA Technical Reports Server (NTRS)
Nett, C. N.; Jacobson, C. A.; Balas, M. J.
1983-01-01
This paper reviews and extends the fractional representation theory. In particular, new and powerful robustness results are presented. This new theory is utilized to develop a preliminary design methodology for finite dimensional control of a class of linear evolution equations on a Banach space. The design is for stability in an input-output sense, but particular attention is paid to internal stability as well.
ERIC Educational Resources Information Center
Deng, Nina
2011-01-01
Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…
NASA Technical Reports Server (NTRS)
White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki
2017-01-01
The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has been recently implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented stencil, weighted linear least squares, cell-average gradient method; a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method; and a cell-face gradient method that contains a cell-skewness-sensitive damping term derived using hyperbolic diffusion based concepts. A data-parallel matrix-based symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the first cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law-of-the-wall. Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry on a non-hex-dominant grid.
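As a sketch of the weighted linear least-squares cell-average gradient idea mentioned above (a generic illustration only; VULCAN-CFD's exact weighting and augmented-stencil construction are not reproduced here):

```python
import numpy as np

# Weighted LSQ cell-average gradient: fit grad(u) at cell 0 from the
# differences to its stencil neighbors, with inverse-distance weights
# (one common choice) de-emphasizing far cells.
def wlsq_gradient(xc0, u0, xc_nbrs, u_nbrs):
    d = xc_nbrs - xc0                        # displacements to neighbor centroids
    w = 1.0 / np.linalg.norm(d, axis=1)      # inverse-distance weights
    A = w[:, None] * d                       # weighted LSQ matrix
    b = w * (u_nbrs - u0)                    # weighted cell-average differences
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

xc0 = np.zeros(3)
xc_nbrs = np.random.rand(10, 3) - 0.5        # hypothetical stencil centroids
true_grad = np.array([1.0, -2.0, 0.5])
u_nbrs = xc_nbrs @ true_grad                 # linear field: gradient recovered exactly
print(wlsq_gradient(xc0, 0.0, xc_nbrs, u_nbrs))
```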
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce
2009-01-01
We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005
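The matrix-free structure of such a solver can be illustrated with SciPy's LinearOperator and GMRES; the matvec below is a toy stand-in for the FFTSVD-accelerated BEM product, not the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
d = np.linspace(1.0, 2.0, n)      # toy diagonal "near-field" part

def matvec(x):
    # Placeholder for a compressed (e.g., FFTSVD-accelerated) dense product.
    return d * x + 0.01 * x.sum()

A = LinearOperator((n, n), matvec=matvec)  # the dense BEM matrix is never formed
b = np.ones(n)
x, info = gmres(A, b, atol=1e-10)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A.matvec(x) - b))
```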
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Lv, Youbin; Wang, Hong
Optimal operation of a practical blast furnace (BF) ironmaking process depends largely on a good measurement of molten iron quality (MIQ) indices. However, measuring the MIQ online is not feasible using the available techniques. In this paper, a novel data-driven robust modeling approach is proposed for online estimation of MIQ using improved random vector functional-link networks (RVFLNs). Since the output weights of traditional RVFLNs are obtained by the least squares approach, a robustness problem may occur when the training dataset is contaminated with outliers, which affects the modeling accuracy of RVFLNs. To solve this problem, a Cauchy-distribution-weighted M-estimation based robust RVFLN is proposed. Since the weights of different outlier data are properly determined by the Cauchy distribution, their contributions to the model can be properly distinguished, and robust and better modeling results can be achieved. Moreover, given that the BF is a complex nonlinear system with numerous coupled variables, data-driven canonical correlation analysis is employed to identify the most influential components from the multitudinous factors that affect the MIQ indices, so as to reduce the model dimension. Finally, experiments using industrial data and comparative studies demonstrate that the obtained model produces better modeling and estimation accuracy and stronger robustness than other modeling methods.
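The Cauchy-weighted M-estimation idea can be sketched as iteratively reweighted least squares on the output weights of a random-feature model; the constants and data below are illustrative assumptions, not the paper's industrial setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                        # process variables (placeholder)
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=300)
y[::25] += 8.0                                       # inject gross outliers

W = rng.normal(size=(6, 40)); b0 = rng.normal(size=40)
H = np.hstack([X, np.tanh(X @ W + b0)])              # direct links + random features

beta = np.linalg.lstsq(H, y, rcond=None)[0]          # ordinary LS start
for _ in range(20):                                   # IRLS with Cauchy weights
    r = y - H @ beta
    c = 2.385 * np.median(np.abs(r)) / 0.6745        # robust scale (MAD-based)
    w = 1.0 / (1.0 + (r / c) ** 2)                   # Cauchy weight: outliers downweighted
    Hw = H * w[:, None]
    beta = np.linalg.solve(H.T @ Hw + 1e-8 * np.eye(H.shape[1]), Hw.T @ y)
print("robust fit residual MAD:", np.median(np.abs(y - H @ beta)))
```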
Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement
Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas
2018-06-07
In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress the creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called the Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. We verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.
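For reference, the two classical limiters that DAL blends can be written as slope selectors acting on the one-sided differences a and b, as in this minimal sketch; DAL's contribution is choosing between them per cell based on the principal flow direction.

```python
import numpy as np

def minmod(a, b):
    """Most dissipative choice: smallest-magnitude slope, zero across extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def van_leer(a, b):
    """Harmonic mean of the slopes: sharper, still zero across extrema."""
    with np.errstate(divide="ignore", invalid="ignore"):
        h = 2.0 * a * b / (a + b)
    return np.where(a * b > 0.0, h, 0.0)

a = np.array([1.0, 1.0, -1.0])   # left-sided differences
b = np.array([0.5, 3.0, 1.0])    # right-sided differences
print(minmod(a, b), van_leer(a, b))
```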
Method for hue plane preserving color correction.
Mackiewicz, Michal; Andersen, Casper F; Finlayson, Graham
2016-11-01
Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space defined by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, each defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering the estimation accuracy of higher-order methods, it maintains the linear colorimetric relations of colors in hue planes. As a significant consequence, it also renders the colorimetric estimates invariant to exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC using constrained least squares in the optimization, where the subregions can be chosen freely in number and position in order to optimize the results while constraining transform continuity at the subregion boundaries. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.
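One simple way to impose the white-point constraint on a least-squares color matrix is a rank-one correction, sketched below; this is a generic illustration of the constraint, not the paper's constrained-least-squares formulation.

```python
import numpy as np

# Fit an unconstrained 3x3 RGB->XYZ matrix, then correct it so the white
# point maps exactly (a simple projection, not the paper's method).
rng = np.random.default_rng(0)
RGB = rng.random((100, 3))                 # placeholder training colors
M_true = np.array([[0.6, 0.2, 0.2], [0.3, 0.6, 0.1], [0.0, 0.1, 0.9]])
XYZ = RGB @ M_true.T

M_ls = np.linalg.lstsq(RGB, XYZ, rcond=None)[0].T   # unconstrained fit
r_w = np.array([1.0, 1.0, 1.0])            # device white (assumed)
x_w = M_true @ r_w                          # its required XYZ value

# Rank-one update guaranteeing M @ r_w == x_w with minimal perturbation of M_ls.
M = M_ls + np.outer(x_w - M_ls @ r_w, r_w) / (r_w @ r_w)
print(np.allclose(M @ r_w, x_w))
```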
Párta, László; Zalai, Dénes; Borbély, Sándor; Putics, Akos
2014-02-01
The application of dielectric spectroscopy has frequently been investigated as an on-line cell culture monitoring tool; however, it still requires supporting data and experience in order to become a robust technique. In this study, dielectric spectroscopy was used to predict viable cell density (VCD) at industrially relevant high levels in concentrated fed-batch cultures of Chinese hamster ovary cells producing a monoclonal antibody for pharmaceutical purposes. For on-line dielectric spectroscopy measurements, capacitance was scanned over a wide range of frequencies (100-19,490 kHz) in six parallel cultivation batches. Prior to detailed mathematical analysis of the collected data, principal component analysis (PCA) was applied to compare the dielectric behavior of the cultivations; this analysis detected measurement disturbances. Using the measured spectroscopic data, partial least squares regression (PLS), Cole-Cole, and linear modeling were applied and compared in order to predict VCD. The Cole-Cole and PLS models provided reliable prediction over the entire cultivation, including both the early and decline phases of cell growth, while the linear model failed to estimate VCD in the later, declining cultivation phase. With regard to measurement error sensitivity, marked differences were found among PLS, Cole-Cole, and linear modeling. VCD prediction accuracy in the runs with measurement disturbances could be improved by first-derivative pre-treatment in PLS and by parameter optimization of the Cole-Cole model.
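The Cole-Cole modeling referred to above fits a dispersion of the following (real-part) form to the capacitance scan. The sketch uses synthetic data and assumed parameter values; the fitted dispersion magnitude dc is the quantity that correlates with VCD.

```python
import numpy as np
from scipy.optimize import curve_fit

# Real part of the Cole-Cole dispersion, as commonly fitted to multi-frequency
# capacitance scans of cell suspensions (parameter values are illustrative).
def cole_cole(f, c_inf, dc, fc, alpha):
    x = (f / fc) ** (1.0 - alpha)
    s = np.sin(np.pi * alpha / 2.0)
    return c_inf + dc * (1.0 + x * s) / (1.0 + 2.0 * x * s + x * x)

f = np.logspace(2, 4.3, 25)                          # ~0.1-20 MHz scan, in kHz
c_meas = cole_cole(f, 2.0, 10.0, 1000.0, 0.1)        # synthetic "measurement"
c_meas += 0.05 * np.random.default_rng(1).normal(size=f.size)

popt, _ = curve_fit(cole_cole, f, c_meas, p0=[1.0, 5.0, 800.0, 0.2],
                    bounds=([0.0, 0.0, 1.0, 0.0], [10.0, 50.0, 5000.0, 0.5]))
print("c_inf, dc, fc, alpha =", popt)                # dc scales with viable cell density
```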
Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas
In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress the creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called the Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. We verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.
SPEX: the Spectropolarimeter for Planetary Exploration
NASA Astrophysics Data System (ADS)
Rietjens, J. H. H.; Snik, F.; Stam, D. M.; Smit, J. M.; van Harten, G.; Keller, C. U.; Verlaan, A. L.; Laan, E. C.; ter Horst, R.; Navarro, R.; Wielinga, K.; Moon, S. G.; Voors, R.
2017-11-01
We present SPEX, the Spectropolarimeter for Planetary Exploration, a compact, robust and low-mass spectropolarimeter designed to operate from an orbiting or in situ platform. Its purpose is to simultaneously measure, with high accuracy, the radiance and the state (degree and angle) of linear polarization of sunlight that has been scattered in a planetary atmosphere and/or reflected by a planetary surface. The degree of linear polarization is extremely sensitive to the microphysical properties of atmospheric or surface particles (such as size, shape, and composition), and to the vertical distribution of atmospheric particles, such as cloud top altitudes. Measurements such as those performed by SPEX are therefore crucial, and often the only tool, for disentangling the many parameters that describe planetary atmospheres and surfaces. SPEX uses a novel, passive method for its radiance and polarization observations that is based on a carefully selected combination of polarization optics. This method, called spectral modulation, encodes the degree and angle of linear polarization in, respectively, the amplitude and phase of a modulation of the radiance spectrum. The polarization optics consists of an achromatic quarter-wave retarder, an athermal multiple-order retarder, and a polarizing beam splitter. We show first results obtained with the recently developed prototype of the SPEX instrument, and present a performance analysis based on a dedicated vector radiative transport model together with a recently developed SPEX instrument simulator.
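In the spectral modulation literature the measured spectrum is commonly written in the following form, where I is the radiance, P_L and φ the degree and angle of linear polarization, and δ the retardance of the multiple-order retarder; this is quoted as the generic form of the technique, not verbatim from the SPEX paper.

```latex
S(\lambda) = \tfrac{1}{2}\, I(\lambda)\left[\, 1 + P_L(\lambda)\,
\cos\!\left(\frac{2\pi\,\delta(\lambda)}{\lambda} + 2\phi(\lambda)\right) \right]
```

The amplitude of the sinusoidal carrier thus encodes the degree of linear polarization, and its phase the polarization angle, so both are recovered from a single spectrum without moving parts.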
Comparison of interpolation functions to improve a rebinning-free CT-reconstruction algorithm.
de las Heras, Hugo; Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2008-01-01
The robust algorithm OPED for the reconstruction of images from Radon data has recently been developed. It reconstructs an image from parallel data acquired in a special scanning geometry that needs no rebinning, only a simple re-ordering, so that the acquired fan data can be used directly for the reconstruction. However, if the number of rays per fan view is increased, empty cells appear in the sinogram. These cells need to be filled by interpolation before the reconstruction can be carried out. The present paper analyzes linear interpolation, cubic splines and parametric (or "damped") splines for this interpolation task. The reconstruction accuracy of the resulting images was measured by the Normalized Mean Square Error (NMSE), the Hilbert Angle, and the Mean Relative Error. The spatial resolution was measured by the Modulation Transfer Function (MTF). Cubic splines were confirmed to be the most suitable method: the reconstructed images resulting from cubic spline interpolation show a significantly lower NMSE than those from linear interpolation and have the largest MTF for all frequencies. Parametric splines proved advantageous only for small sinograms (below 50 fan views).
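A toy comparison of the two main candidates, scored with the NMSE figure of merit used above (a synthetic smooth 1D signal stands in for a sinogram column; not the paper's data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, np.pi, 12)                  # sparse known cells
xf = np.linspace(0.0, np.pi, 200)                # grid with empty cells to fill
truth = np.sin(xf) ** 2

lin = np.interp(xf, x, np.sin(x) ** 2)           # linear interpolation
cub = CubicSpline(x, np.sin(x) ** 2)(xf)         # cubic spline interpolation

def nmse(est):
    return np.sum((est - truth) ** 2) / np.sum(truth ** 2)

print("NMSE linear:", nmse(lin), " NMSE cubic:", nmse(cub))
```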
Two-component dark-bright solitons in three-dimensional atomic Bose-Einstein condensates.
Wang, Wenlong; Kevrekidis, P G
2017-03-01
In the present work, we revisit two-component Bose-Einstein condensates in their fully three-dimensional (3D) form. Motivated by earlier studies of dark-bright solitons in the 1D case, we explore the stability of these structures in their fully 3D form in two variants. In one the dark soliton is planar and trapping a planar bright (disk) soliton. In the other case, a dark spherical shell soliton creates an effective potential in which a bright spherical shell of atoms is trapped in the second component. We identify these solutions as numerically exact states (up to a prescribed accuracy) and perform a Bogolyubov-de Gennes linearization analysis that illustrates that both structures can be dynamically stable in suitable intervals of sufficiently low chemical potentials. We corroborate this finding theoretically by analyzing the stability via degenerate perturbation theory near the linear limit of the system. When the solitary waves are found to be unstable, we explore their dynamical evolution via direct numerical simulations which, in turn, reveal wave forms that are more robust. Finally, using the SO(2) symmetry of the model, we produce multi-dark-bright planar or shell solitons involved in pairwise oscillatory motion.
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. Mostly, the solution has been based on a linear combination of color components in the multispectral images. However, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method to augment the original color image with the synchronized near infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F measure in comparison to some existing methods using linear combinations of color components. The results show that the fusion of information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy in citrus fruit identification under natural lighting conditions.
NASA Astrophysics Data System (ADS)
Chinowsky, Timothy M.; Yee, Sinclair S.
2002-02-01
Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A.; Raut, Rahul P.; Choudhari, Vishnu P.; Kuchekar, Bhanudas S.
2011-01-01
Aim: To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. Materials and Methods: The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. Results: The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60–360 ng/band for LOR and 30–180 ng/band for THIO with correlation coefficients r2 = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7–101.2 %. Conclusion: The proposed method was optimized and validated as per the ICH guidelines. PMID:23781452
Raju, K V S N; Pavan Kumar, K S R; Siva Krishna, N; Madhava Reddy, P; Sreenivas, N; Kumar Sharma, Hemant; Himabindu, G; Annapurna, N
2016-01-01
A capillary gas chromatography method with a short run time, using a flame ionization detector, has been developed for the quantitative trace-level determination of mesityl oxide and diacetone alcohol in the atazanavir sulfate drug substance. The chromatographic separation was achieved on a fused silica capillary column coated with 5% diphenyl and 95% dimethyl polysiloxane stationary phase (Rtx-5, 30 m × 0.53 mm × 5.0 µm). The run time was 20 min employing programmed temperature with a split mode (1:5), and the method was validated for specificity, sensitivity, precision, linearity, and accuracy. The detection and quantitation limits were 5 µg/g and 10 µg/g, respectively, for both analytes. The method was found to be linear in the range between 10 µg/g and 150 µg/g with a correlation coefficient greater than 0.999, and the average recoveries obtained in atazanavir sulfate were 102.0% and 103.7% for mesityl oxide and diacetone alcohol, respectively. The developed method was found to be robust and rugged. The detailed experimental results are discussed in this research paper.
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. In EEG forward modeling, placing the source currents in the geometrically complex grey matter compartment is a challenging but necessary task for avoiding forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence-conforming H(div) basis functions. Both linear and quadratic functions are used, while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position-based optimization (PBO) and the mean position/orientation (MPO) method. The results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approaches, which utilize monopolar loads instead of dipolar currents.
Frank, Oliver; Kreissl, Johanna Karoline; Daschner, Andreas; Hofmann, Thomas
2014-03-26
A fast and precise quantitative proton nuclear magnetic resonance (qHNMR) method for the determination of low molecular weight target molecules in reference materials and natural isolates has been validated using ERETIC 2 (Electronic REference To access In vivo Concentrations) based on the PULCON (PULse length based CONcentration determination) methodology, and compared to the gravimetric results. Using an Avance III NMR spectrometer (400 MHz) equipped with a broad band observe (BBO) probe, the qHNMR method was validated by determining its linearity, range, precision, and accuracy, as well as robustness and limit of quantitation. The linearity of the method was assessed by measuring samples of l-tyrosine, caffeine, or benzoic acid in a concentration range between 0.3 and 16.5 mmol/L (r2 ≥ 0.99), whereas the interday and intraday precisions were found to be ≤2%. The recovery of a range of reference compounds was ≥98.5%, demonstrating that the qHNMR method is a precise tool for the rapid quantitation (~15 min) of food-related target compounds in reference materials and natural isolates such as nucleotides, polyphenols, or cyclic peptides.
Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K
2015-01-01
In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows a λmax of 289 nm in distilled water, simulated tears and phosphate buffer saline. The linearity of the developed methods was in the range of 3-30 μg/ml of drug, with correlation coefficients (r2) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate buffer saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limits of detection in the different media were found to be 0.62, 0.72 and 0.88 μg/ml, respectively. The limits of quantification were found to be 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonization guidelines with respect to specificity, linearity, range, accuracy, precision and robustness. The method was found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
Rebouças, Camila Tavares; Kogawa, Ana Carolina; Salgado, Hérida Regina Nunes
2018-05-18
Background: A green analytical chemistry method was developed for quantification of enrofloxacin in tablets. The drug, a second-generation fluoroquinolone, was first introduced in veterinary medicine for the treatment of various bacterial species. Objective: This study proposed to develop, validate, and apply a reliable, low-cost, fast, and simple IR spectroscopy method for quantitative routine determination of enrofloxacin in tablets. Methods: The method was completely validated according to the International Conference on Harmonisation guidelines, showing accuracy, precision, selectivity, robustness, and linearity. Results: It was linear over the concentration range of 1.0-3.0 mg with correlation coefficients >0.9999 and LOD and LOQ of 0.12 and 0.36 mg, respectively. Conclusions: Now that this IR method has met performance qualifications, it can be adopted and applied for the analysis of enrofloxacin tablets for production process control. The validated method can also be utilized to quantify enrofloxacin in tablets and thus is an environmentally friendly alternative for the routine analysis of enrofloxacin in quality control. Highlights: A new green method for the quantitative analysis of enrofloxacin by Fourier-Transform Infrared spectroscopy was validated. It is a fast, clean and low-cost alternative for the evaluation of enrofloxacin tablets.
Robust control of combustion instabilities
NASA Astrophysics Data System (ADS)
Hong, Boe-Shong
Several interactive dynamical subsystems, each of which has its own time scale and physical significance, are decomposed to build a feedback-controlled, robust combustion-fluid dynamics. On the fast time scale, the phenomenon of combustion instability corresponds to the internal feedback of two subsystems, acoustic dynamics and flame dynamics, which are parametrically dependent on the slow-time-scale mean-flow dynamics controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis and synthesis, to deal with model uncertainties, environmental noise and time-varying mean-flow operation. The conservation laws are decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, which serve for synthesizing an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal status, while achieving trade-offs among robustness, performance and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed by the finite-element method. Some important issues related to physical understanding and engineering application are discussed in simulated results of the control system.
Chen, Huipeng; Li, Mengyuan; Zhang, Yi; Xie, Huikai; Chen, Chang; Peng, Zhangming; Su, Shaohui
2018-02-08
Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm−1 under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
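ROBPCA and RSIMPLS are specialized algorithms; as a rough sketch of the RPCR idea only, and under the assumption that simpler robust building blocks are acceptable stand-ins, one can take robust principal components from a minimum covariance determinant (MCD) estimate and then run a robust regression on the scores.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=100)
X[:5] += 10.0  # a few outlying observations

# Robust "PCA" step: eigenvectors of the MCD covariance estimate
mcd = MinCovDet(random_state=0).fit(X)
eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigvals)[::-1]
scores = (X - mcd.location_) @ eigvecs[:, order[:3]]  # keep 3 robust components

# Robust regression of the response on the robust scores
model = HuberRegressor().fit(scores, y)
print(model.coef_)
```

RSIMPLS differs in that it builds the PLS components themselves from a robust covariance estimate rather than regressing on robust PCA scores.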
Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.
He, A; Deepan, B; Quan, C
2017-09-01
A regularized phase tracker (RPT) is an effective method for the demodulation of single closed-fringe patterns. However, lengthy calculation time, a specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, the first and second phase derivatives are pre-determined by the density-direction-combined method and a discrete higher-order demodulation algorithm, respectively. Hence, the cost function is effectively simplified, reducing the computation time significantly. Moreover, the pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specially designed scanning strategy is needed; nevertheless, the method is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison of the proposed method with existing RPT methods is carried out. The simulation results show that the proposed method achieves the highest accuracy with less computational time. The experimental results prove the robustness and accuracy of the proposed method for the demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.
On Motion Planning and Control of Multi-Link Lightweight Robotic Manipulators
NASA Technical Reports Server (NTRS)
Cetinkunt, Sabri
1987-01-01
A general gross and fine motion planning and control strategy is needed for lightweight robotic manipulator applications such as painting, welding, material handling, surface finishing, and spacecraft servicing. The control problem of lightweight manipulators is to perform fast, accurate, and robust motions despite payload variations, structural flexibility, and other environmental disturbances. The performance of the rigid-manipulator-model-based computed torque and decoupled joint control methods is determined and simulated for the counterpart flexible manipulators. A counterpart flexible manipulator is defined as a manipulator which has structural flexibility in addition to the same inertial, geometric, and actuation properties as a given rigid manipulator. An adaptive model following control (AMFC) algorithm is developed to improve the performance in speed, accuracy, and robustness. It is found that the AMFC improves the speed performance by a factor of two over the conventional non-adaptive control methods for given accuracy requirements, while proving to be more robust with respect to payload variations. Yet there are clear limitations on the performance of AMFC alone as well, which are imposed by the arm flexibility. In the search to further improve speed performance while providing a desired accuracy and robustness, a combined control strategy is developed. Furthermore, the problem of switching from one control structure to another during the motion and the implementation aspects of combined control are discussed.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges for the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects from each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results.
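The winning optimizer, Powell's gradient-free conjugate-direction search, is available in standard libraries. Below is a minimal sketch on a hypothetical mono-exponential decay model standing in for the far richer NODDI/CHARMED models; the model, data and starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-voxel signal model: S(b) = S0 * exp(-b * D)
b_values = np.array([0.0, 500.0, 1000.0, 2000.0])
signal = 1.0 * np.exp(-b_values * 1e-3) + 0.01 * np.random.default_rng(1).normal(size=4)

def objective(params):
    """Sum-of-squares misfit between data and the decay model."""
    s0, d = params
    return np.sum((signal - s0 * np.exp(-b_values * d)) ** 2)

# Gradient-free Powell conjugate-direction search, as favored in the study
result = minimize(objective, x0=[0.5, 5e-4], method="Powell")
print(result.x)  # fitted (S0, D)
```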
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and during the assembly phase, and a key consideration in building machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and their position parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical modeling approach relates the alignment and angular errors to the supporting components of the machine motion, the linear guideways and linear drives. The purpose of using this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position and angle on a linear guideway of three-axis vertical milling machines.
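Geometric error models of this kind are commonly assembled from homogeneous transformation matrices under a small-angle approximation. The sketch below illustrates that idea only; the per-axis error values and the simple composition order are assumptions, not the authors' calibrated model.

```python
import numpy as np

def hom(dx, dy, dz, ax, ay, az):
    """Small-angle homogeneous transform for one axis: 3 linear + 3 angular errors."""
    t = np.eye(4)
    t[:3, 3] = [dx, dy, dz]
    t[:3, :3] = [[1, -az, ay], [az, 1, -ax], [-ay, ax, 1]]  # first-order rotation
    return t

# Hypothetical per-axis error parameters (nine linear + nine angular overall);
# the three perpendicularity errors would enter as fixed inter-axis rotations.
err_x = hom(1e-3, 2e-3, 0.5e-3, 1e-5, 2e-5, 1e-5)
err_y = hom(0.5e-3, 1e-3, 1e-3, 2e-5, 1e-5, 3e-5)
err_z = hom(2e-3, 0.5e-3, 1e-3, 1e-5, 1e-5, 2e-5)

tool_error = err_x @ err_y @ err_z  # composed volumetric error at the tool point
print(tool_error[:3, 3])            # predicted positional deviation
```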
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R
2012-10-21
In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI information determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify the events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm³ phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA based on Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust in predicting the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be more suitable for DOI estimation in smaller crystals where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors.
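As a sketch of the classification step alone (not the pulse acquisition or the Laguerre expansion), linear discriminant analysis over per-pulse feature vectors labeled by known irradiation depth can be set up as follows; the synthetic features and depth classes are stand-ins for the measured coefficients.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# Hypothetical training data: one feature vector per pulse
# (e.g., Laguerre coefficients), labeled by known irradiation depth class
n_per_depth, n_features, depths = 200, 8, [0, 1, 2, 3, 4]
X = np.vstack([rng.normal(loc=d, scale=1.0, size=(n_per_depth, n_features))
               for d in depths])
y = np.repeat(depths, n_per_depth)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict(X[:3]))  # predicted depth class for new pulses
```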
NASA Astrophysics Data System (ADS)
Xing, Yafei; Macq, Benoit
2017-11-01
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63 ± 19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 ± 0.22 mm and 1.84 ± 8.98 mm for the learning model, while for the PS method it ranged between 0.3 ± 0.25 mm and 20.71 ± 8.38 mm.
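The described learning model, PCA for decorrelation followed by a linear predictor retaining 99.95% of the variance, maps naturally onto a standard pipeline. In the sketch below, the random profiles and shifts are placeholders for the simulated error scenarios.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Hypothetical stand-ins: rows are prompt-gamma profiles, targets are BP shifts (mm)
profiles = rng.normal(size=(500, 120))
shifts = rng.normal(size=500)

# PCA decorrelates the collinear profile bins; retain 99.95% of the variance
model = make_pipeline(PCA(n_components=0.9995), LinearRegression())
model.fit(profiles, shifts)
print(model.predict(profiles[:2]))
```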
Luciferase-Zinc-Finger System for the Rapid Detection of Pathogenic Bacteria.
Shi, Chu; Xu, Qing; Ge, Yue; Jiang, Ling; Huang, He
2017-08-09
Rapid and reliable detection of pathogenic bacteria is crucial for food safety control. Here, we present a novel luciferase-zinc finger system for the detection of pathogens that offers rapid and specific profiling. The system, which uses a zinc-finger protein domain to probe zinc finger recognition sites, was designed to bind the amplified conserved regions of 16S rDNA, and the obtained products were detected using a modified luciferase. The luciferase-zinc finger system not only maintained luciferase activity but also allowed the specific detection of different bacterial species, with a sensitivity as low as 10 copies and a linear range from 10 to 10⁴ copies per microliter of the specific PCR product. Moreover, the system is robust and rapid, enabling the simultaneous detection of 6 species of bacteria in artificially contaminated samples with excellent accuracy. Thus, we envision that our luciferase-zinc finger system will have far-reaching applications.
Automatic PSO-Based Deformable Structures Markerless Tracking in Laparoscopic Cholecystectomy
NASA Astrophysics Data System (ADS)
Djaghloul, Haroun; Batouche, Mohammed; Jessel, Jean-Pierre
An automatic and markerless tracking method for deformable structures (digestive organs) during laparoscopic cholecystectomy intervention, which uses particle swarm optimization (PSO) behaviour and preoperative a priori knowledge, is presented. The shape associated with the global best particles of the population determines a coarse representation of the targeted organ (the gallbladder) in monocular laparoscopic colored images. The swarm behaviour is directed by a new fitness function to be optimized, improving the detection and tracking performance. The function is defined by a linear combination of two terms, namely the human a priori knowledge term (H) and the particle's density term (D). Within the limits of standard PSO characteristics, experimental results on both synthetic and real data show the effectiveness and robustness of our method. Indeed, it outperforms existing methods (such as active contours, deformable models and Gradient Vector Flow) in accuracy and convergence rate, without the need for explicit initialization.
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa
2017-03-01
This paper presents an algorithm to calibrate the center of rotation for X-ray tomography using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in evaluations on synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a more effective approach to a DTCO study, in terms of accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, as opposed to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The design of the calibration patterns is extremely important when temporal data are absent, in order to obtain robust model parameters. A model calibrated to a hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model predictions using programmed defects on chemo-epitaxy show encouraging results and agree qualitatively well with theoretical predictions from strong segregation theory.
Dubascoux, Stephane; Nicolas, Marine; Rime, Celine Fragniere; Payot, Janique Richoz; Poitevin, Eric
2015-01-01
A single-laboratory validation (SLV) is presented for the simultaneous determination of 10 ultratrace elements (UTEs), including aluminum (Al), arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), mercury (Hg), molybdenum (Mo), lead (Pb), selenium (Se), and tin (Sn), in infant formulas, adult nutritionals, and milk-based products by inductively coupled plasma (ICP)/MS after acidic pressure digestion. This robust and routine multielemental method is based on several official methods, with modifications of the sample preparation (using either microwave digestion or high-pressure ashing) and of the analytical conditions (using ICP/MS with collision cell technology). The SLV fulfills AOAC method performance criteria in terms of linearity, specificity, sensitivity, precision, and accuracy, and fully meets most international regulatory limits for trace contaminants and/or recommended nutrient levels established for the 10 UTEs in the targeted matrixes.
Confidence limits for data mining models of options prices
NASA Astrophysics Data System (ADS)
Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.
2004-12-01
Non-parametric methods such as artificial neural nets can successfully model the prices of financial options, outperforming the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options (http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non-constant variance (or volatility).
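One simple way to obtain local error bars for a non-linear regression with non-constant variance, in the spirit of (though not identical to) the method described, is to fit a second model to the squared residuals; every choice below is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.1 + 0.1 * np.abs(x[:, 0]))  # heteroscedastic noise

mean_model = GradientBoostingRegressor().fit(x, y)
resid2 = (y - mean_model.predict(x)) ** 2
var_model = GradientBoostingRegressor().fit(x, resid2)  # local variance estimate

x_new = np.array([[0.0], [2.5]])
mu = mean_model.predict(x_new)
sigma = np.sqrt(np.clip(var_model.predict(x_new), 0.0, None))
print(mu - 1.96 * sigma, mu + 1.96 * sigma)  # approximate 95% prediction intervals
```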
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for the limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples of up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and accuracy of the proposed algorithm.
Homogenization of locally resonant acoustic metamaterials towards an emergent enriched continuum.
Sridhar, A; Kouznetsova, V G; Geers, M G D
This contribution presents a novel homogenization technique for modeling heterogeneous materials with micro-inertia effects such as locally resonant acoustic metamaterials. Linear elastodynamics is used to model the micro- and macroscale problems, and an extended first-order Computational Homogenization framework is used to establish the coupling. Craig-Bampton Mode Synthesis is then applied to solve and eliminate the microscale problem, resulting in a compact closed-form description of the microdynamics that accurately captures the Local Resonance phenomena. The resulting equations represent an enriched continuum in which additional kinematic degrees of freedom emerge to account for Local Resonance effects which would otherwise be absent in a classical continuum. Such an approach retains the accuracy and robustness offered by a standard Computational Homogenization implementation, whereby the problem and the computational time are reduced to the on-line solution of one scale only.
A neural-based remote eye gaze tracker under natural head motion.
Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso
2008-10-01
A novel approach to view-based eye gaze tracking for human-computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeiny, Badr A.
2011-12-01
Three simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra are developed for the simultaneous determination of Amlodipine besylate (AM) and Atorvastatin calcium (AT) in tablet dosage forms. The first method is the first derivative of the ratio spectra (1DD), the second is ratio subtraction and the third is the method of mean centering of ratio spectra. The calibration curves are linear over the concentration ranges of 3-40 and 8-32 μg/ml for AM and AT, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and are applied to a commercial pharmaceutical preparation of the subjected drugs. The standard deviation is <1.5 in the assay of raw materials and tablets. The methods are validated as per ICH guidelines, and accuracy, precision, repeatability and robustness are found to be within the acceptable limits.
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2013-03-01
Three simple, specific, accurate and precise spectrophotometric methods depending on the proper selection of two wavelengths are developed for the simultaneous determination of Amlodipine besylate (AML) and Atorvastatin calcium (ATV) in tablet dosage forms. The first method is the new Ratio Difference method, the second method is the Bivariate method and the third one is the Absorbance Ratio method. The calibration curve is linear over the concentration range of 4-40 and 8-32 μg/mL for AML and ATV, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and they are applied to commercial pharmaceutical preparation of the subjected drugs. Methods are validated according to the ICH guidelines and accuracy, precision, repeatability and robustness are found to be within the acceptable limit. The mathematical explanation of the procedures is illustrated.
Exact PDF equations and closure approximations for advective-reactive transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.
2013-06-01
Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations and it provides an upper bound of predictive uncertainty.
Topic Identification and Categorization of Public Information in Community-Based Social Media
NASA Astrophysics Data System (ADS)
Kusumawardani, RP; Basri, MH
2017-01-01
This paper presents a semi-supervised method for the topic identification and classification of short texts in social media, and its application to tweets containing dialogues in a large community of dwellers in a city, written mostly in Indonesian. These dialogues comprise a wealth of information about the city, shared in real time. We found that despite the high irregularity of the language used, and the scarcity of suitable linguistic resources, a meaningful identification of topics could be performed by clustering the tweets using the K-Means algorithm. The resulting clusters are found to be robust enough to serve as the basis of a classification. On three grouping schemes derived from the clusters, we obtain accuracies of 95.52%, 95.51% and 96.7% using linear SVMs, reflecting the applicability of this method for topic identification and classification on such data.
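The two-stage recipe, clustering first and then training a linear SVM on the cluster-derived groupings, can be sketched as follows; the handful of Indonesian-like tweets is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# Hypothetical tweet corpus standing in for the city dialogues
tweets = ["jalan macet di pusat kota", "listrik padam lagi malam ini",
          "banjir di jalan utama", "lampu jalan mati",
          "air tidak mengalir", "macet parah"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(tweets)

# Step 1: unsupervised topic discovery
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: train a linear SVM on the cluster-derived labels
clf = LinearSVC().fit(X, labels)
print(clf.predict(tfidf.transform(["macet di jalan"])))
```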
A matrix-form GSM-CFD solver for incompressible fluids and its application to hemodynamics
NASA Astrophysics Data System (ADS)
Yao, Jianyao; Liu, G. R.
2014-10-01
A GSM-CFD solver for incompressible flows is developed based on the gradient smoothing method (GSM). A matrix-form algorithm and a corresponding data structure for GSM are devised to efficiently approximate the spatial gradients of field variables using the gradient smoothing operation. The calculated gradient values on various test fields show that the proposed GSM is capable of exactly reproducing linear fields and is of second-order accuracy on all kinds of meshes. It is found that the GSM is much more robust to mesh deformation and therefore more suitable for problems with complicated geometries. Integrated with the artificial compressibility approach, the GSM is extended to solve incompressible flows. As an example, a flow simulation of the carotid bifurcation is carried out to show the effectiveness of the proposed GSM-CFD solver. The blood is modeled as an incompressible Newtonian fluid and the vessel is treated as a rigid wall in this paper.
Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto
2018-05-09
We examine various integration schemes for the time-dependent Kohn-Sham equations. Contrary to the time-dependent Schrödinger equation, this set of equations is nonlinear, due to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost versus accuracy. The clear winner, in terms of robustness, simplicity, and efficiency, is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
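The fourth-order commutator-free Magnus integrator favored here requires two matrix exponentials per step with Hamiltonians sampled at Gauss nodes. As a safe minimal illustration of the same family, the sketch below implements only its lowest-order member, the exponential midpoint rule, on a hypothetical driven two-level Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

def h(t):
    """Hypothetical 2x2 time-dependent Hamiltonian (driven two-level system)."""
    return np.array([[1.0, 0.5 * np.cos(t)], [0.5 * np.cos(t), -1.0]])

def step_exp_midpoint(psi, t, dt):
    """Exponential midpoint rule: the lowest-order commutator-free Magnus scheme."""
    return expm(-1j * dt * h(t + 0.5 * dt)) @ psi

psi = np.array([1.0 + 0j, 0.0 + 0j])
t, dt = 0.0, 0.01
for _ in range(1000):
    psi = step_exp_midpoint(psi, t, dt)
    t += dt
print(np.linalg.norm(psi))  # unitary propagation preserves the norm (close to 1)
```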
Robust Bounded Influence Tests in Linear Models
Markatou, Marianthi (The University of Iowa); Hettmansperger, Thomas P. (The Pennsylvania State University)
1988-11-01
Flight control application of new stability robustness bounds for linear uncertain systems
NASA Technical Reports Server (NTRS)
Yedavalli, Rama K.
1993-01-01
This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than the existing bounds, since the technique is meant for only real parameter perturbations (in contrast to specializing the complex-variation case to the real-parameter case). The proposed theory is illustrated with application to several flight control examples.
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
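A comparison of the same flavor, a nonparametric neural network against k-nearest neighbors and a linear discriminant, is straightforward to reproduce on synthetic stand-in features (the generated data below is an assumption, not the Landsat MSS texture set).

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Hypothetical texture-feature stand-in for four cloud classes
X, y = make_classification(n_samples=600, n_features=12, n_classes=4,
                           n_informative=8, random_state=0)

for name, clf in [("neural net", MLPClassifier(max_iter=2000, random_state=0)),
                  ("k-NN", KNeighborsClassifier()),
                  ("linear DA", LinearDiscriminantAnalysis())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```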
Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by some stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control is evaluated for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time window for BLSA is also evaluated in order to meet the stability margin criteria.
Geometry-aware multiscale image registration via OBBTree-based polyaffine log-demons.
Seiler, Christof; Pennec, Xavier; Reyes, Mauricio
2011-01-01
Non-linear image registration is an important tool in many areas of image analysis. For instance, in morphometric studies of a population of brains, free-form deformations between images are analyzed to describe the structural anatomical variability. Such a simple deformation model is justified by the absence of an easily expressible prior about the shape changes. Applying the same algorithms used in brain imaging to orthopedic images might not be optimal due to the difference in the underlying prior on the inter-subject deformations. In particular, using an uninformed deformation prior often leads to local minima far from the expected solution. To improve robustness and promote anatomically meaningful deformations, we propose a locally affine and geometry-aware registration algorithm that automatically adapts to the data. We build upon the log-domain demons algorithm and introduce a new type of OBBTree-based regularization in the registration with a natural multiscale structure. The regularization model is composed of a hierarchy of locally affine transformations via their logarithms. Experiments on mandibles show improved accuracy and robustness when used to initialize the demons, and even similar performance in direct comparison to the demons, with a significantly lower number of degrees of freedom. This closes the gap between polyaffine and non-rigid registration and opens new ways to statistically analyze the registration results.
Inugala, Ugandar Reddy; Pothuraju, Nageswara Rao; Vangala, Ranga Reddy
2013-01-01
This paper describes the development of a rapid, novel, stability-indicating gradient reversed-phase high-performance liquid chromatographic method and associated system suitability parameters for the analysis of naproxcinod in the presence of its related substances and degradation products, using a quality-by-design approach. All of the factors that affect the separation of naproxcinod and its impurities, and their mutual interactions, were investigated, and the robustness of the method was ensured. The method was developed using an Ascentis Express C8 150 × 4.6 mm, 2.7 µm column with a mobile phase containing a gradient mixture of two solvents. The eluted compounds were monitored at 230 nm; the run time was 20 min, within which naproxcinod and its eight impurities were satisfactorily separated. Naproxcinod was subjected to the stress conditions of oxidative, acid, base, hydrolytic, thermal and photolytic degradation. Naproxcinod was found to degrade significantly in acidic and basic conditions and to be stable in thermal, photolytic, oxidative and aqueous degradation conditions. The degradation products were satisfactorily resolved from the primary peak and its impurities, proving the stability-indicating power of the method. The developed method was validated as per International Conference on Harmonization guidelines with respect to specificity, linearity, limit of detection, limit of quantification, accuracy, precision and robustness.
Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin
2016-06-27
Power quality analysis issues, especially the measurement of harmonics and interharmonics in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the integration of electric vehicles, distributed generation and renewable energy has placed extra demands on distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads, whose information is crucial to subsequent analysis and control. This paper gives a detailed description of the power quality analysis framework in a networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with an adaptive linear neuron network. The experiments show that the proposed method is time-efficient and achieves better accuracy on simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
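The three-sample frequency refinement step lends itself to a short sketch. The estimator below is a standard quadratic-interpolation variant on the log-magnitude of the peak DFT bin and its neighbours, shown as an assumed stand-in for the paper's exact formula:

```python
import numpy as np

def refine_frequency(x, fs):
    # Three-sample DFT interpolation: fit a parabola to the log-magnitudes of
    # the peak bin and its two neighbours to estimate the fractional-bin offset.
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = np.argmax(X[1:-1]) + 1                  # peak bin, both neighbours valid
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)     # fractional-bin correction
    return (k + delta) * fs / len(x)

fs = 3200.0
t = np.arange(1024) / fs
print(refine_frequency(np.sin(2 * np.pi * 151.3 * t), fs))  # close to 151.3 Hz
```

The adaptive linear neuron (ADALINE) stage that recovers amplitudes and phases is omitted here.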
Highly scalable and robust rule learner: performance evaluation and comparison.
Kurgan, Lukasz A; Cios, Krzysztof J; Dick, Scott
2006-02-01
Business intelligence and bioinformatics applications increasingly require the mining of datasets consisting of millions of data points, or the crafting of real-time enterprise-level decision support systems for large corporations and drug companies. In all cases, there needs to be an underlying data mining system, and this mining system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact, comprehensible, and have accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are its very high efficiency and its resistance to missing data. DataSqueezer exhibits log-linear asymptotic complexity with the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with the other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples.
Artigues, Margalida; Abellà, Jordi; Colominas, Sergi
2017-11-14
Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in the literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx using a hydrogel (chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO₂NTAs) has been evaluated. The GOx-Chitosan/TiO₂NTAs biosensor showed a sensitivity of 5.46 μA·mM⁻¹ with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The results obtained proved sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95-105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid or citric acid were observed. In addition, storage stability was examined: after 30 days, the GOx-Chitosan/TiO₂NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value. In the worst case, the deviation among the 20 samples evaluated was smaller than 10%.
A design of spectrophotometric microfluidic chip sensor for analyzing silicate in seawater
NASA Astrophysics Data System (ADS)
Cao, X.; Zhang, S. W.; Chu, D. Z.; Wu, N.; Ma, H. K.; Liu, Y.
2017-08-01
High quality and continuous in situ silicate data are required to investigate the mechanism of biogeochemical cycles and the formation of red tides. There is an urgent and growing need for autonomous in situ silicate instruments that can perform determinations on various platforms. However, owing to high reagent and power consumption, as well as high system complexity leading to low reliability and robustness, the performance of commercially available silicate sensors is not satisfactory. To address these problems, we present a new generation of microfluidic continuous flow analysis silicate sensor with sufficient analytical performance and robustness for in situ determination of soluble silicate in seawater. The sensing mechanism is based on the reaction of silicate with ammonium molybdate to form a yellow silicomolybdate complex, which is further reduced to silicomolybdenum blue by ascorbic acid. The limit of detection is 45.1 nmol L⁻¹, and the linear determination range of the sensor is 0-400 μmol L⁻¹. The recovery rate for actual water samples is between 98.1% and 104.0%, and the analysis cycle of the sensor is about 5 minutes. This sensor has the advantages of high accuracy, high integration, low water consumption, and strong anti-interference ability. It has been successfully applied to measuring silicate in seawater in Jiaozhou Bay.
Stretch, Jonathan R; Somorjai, Ray; Bourne, Roger; Hsiao, Edward; Scolyer, Richard A; Dolenko, Brion; Thompson, John F; Mountford, Carolyn E; Lean, Cynthia L
2005-11-01
Nonsurgical assessment of sentinel nodes (SNs) would offer advantages over surgical SN excision by reducing morbidity and costs. Proton magnetic resonance spectroscopy (MRS) of fine-needle aspirate biopsy (FNAB) specimens identifies melanoma lymph node metastases. This study was undertaken to determine the accuracy of the MRS method and thereby establish a basis for the future development of a nonsurgical technique for assessing SNs. FNAB samples were obtained from 118 biopsy specimens from 77 patients during SN biopsy and regional lymphadenectomy. The specimens were histologically evaluated and correlated with MRS data. Histopathologic analysis established that 56 specimens contained metastatic melanoma and that 62 specimens were benign. A linear discriminant analysis-based classifier was developed to distinguish benign tissues from metastases. The presence of metastatic melanoma in lymph nodes was predicted with a sensitivity of 92.9%, a specificity of 90.3%, and an accuracy of 91.5% in a primary data set. In a second data set that used FNAB samples separate from the original tissue samples, melanoma metastases were predicted with a sensitivity of 87.5%, a specificity of 90.3%, and an accuracy of 89.1%, thus supporting the reproducibility of the method. Proton MRS of FNAB samples may provide a robust and accurate diagnosis of metastatic disease in the regional lymph nodes of melanoma patients. These data indicate the potential for SN staging of melanoma without surgical biopsy and histopathological evaluation.
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, which is comprised of an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve the performance along the iteration direction, and the feedback controllers are used to improve the performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in the form of a linear matrix inequality (LMI); the LMI can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
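The paper's LMI condition is specific to the 2-D Roesser setting, but the kind of certificate it encodes can be illustrated with the generic discrete-time Lyapunov test: stability holds iff there exists P > 0 with AᵀPA - P < 0. A minimal sketch in Python with an assumed example matrix (in practice such LMIs are fed to an SDP solver):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.2],
              [0.0, 0.7]])            # assumed example system matrix

# Solving A' P A - P = -I yields a candidate Lyapunov matrix P;
# positive definiteness of P certifies that the LMI is feasible.
P = solve_discrete_lyapunov(A.T, np.eye(2))
eigs = np.linalg.eigvalsh(P)
print("P eigenvalues:", eigs)
print("stable:", np.all(eigs > 0))
```

In the paper the analogous feasibility problem additionally yields the controller gain matrix.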
NASA Astrophysics Data System (ADS)
Al-Mayah, Adil; Moseley, Joanne; Velec, Mike; Brock, Kristy
2011-08-01
Both accuracy and efficiency are critical for the implementation of biomechanical model-based deformable registration in clinical practice. The focus of this investigation is to evaluate the potential of improving the efficiency of deformable image registration of the human lungs without loss of accuracy. Three-dimensional finite element models have been developed using image data of 14 lung cancer patients. Each model consists of two lungs, tumor and external body. Sliding of the lungs inside the chest cavity is modeled using a frictionless surface-based contact model. The effect of element type, finite deformation and elasticity on accuracy and computing time is investigated. Linear and quadratic tetrahedral elements are used with linear and nonlinear geometric analysis. Two types of material properties are applied, namely elastic and hyperelastic. The accuracy of each of the four models is examined using a number of anatomical landmarks representing vessel bifurcation points distributed across the lungs. The registration error is not significantly affected by the element type or linearity of analysis, with an average vector error of around 2.8 mm. The displacement differences between linear and nonlinear analysis methods are calculated for all lung nodes, and a maximum value of 3.6 mm is found in one of the nodes near the entrance of the bronchial tree into the lungs. The 95th percentile of displacement difference ranges between 0.4 and 0.8 mm. However, the time required for the analysis is reduced from 95 min for the quadratic-element, nonlinear-geometry model to 3.4 min for the linear-element, linear-geometry model. Therefore, using linear tetrahedral elements with linear elastic materials and linear geometry is preferable for modeling the breathing motion of lungs for image-guided radiotherapy applications.
Effect of smoothing on robust chaos.
Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae
2010-08-01
In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where the border is the phase-space region in which discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a smoothing parameter and find that the measure of the periodic windows in the parameter space scales linearly with this parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a supposedly piecewise linear dynamical system, which has been implemented as an electronic circuit, is also provided.
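To make the smoothing construction concrete, here is a minimal sketch of a tent-like map whose corner (the border) is blurred over a width eps. The tanh blend is an assumed example smoothing function, not the paper's; the abstract reports that the window scaling is insensitive to this choice:

```python
import numpy as np

def smoothed_tent(x, mu=1.9, eps=0.05):
    # Tent-like piecewise-linear map with the corner at x = 0.5 smoothed
    # over a width eps; eps -> 0 recovers the non-smooth (robustly chaotic) map.
    w = 0.5 * (1.0 + np.tanh((0.5 - x) / eps)) if eps > 0 else float(x < 0.5)
    return mu * (w * x + (1.0 - w) * (1.0 - x))

# iterate to inspect the asymptotic behaviour at one parameter value;
# scanning mu for various eps exposes the periodic windows
x = 0.3
for _ in range(1000):
    x = smoothed_tent(x)
print(x)
```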
Robust, nonlinear, high angle-of-attack control design for a supermaneuverable vehicle
NASA Technical Reports Server (NTRS)
Adams, Richard J.
1993-01-01
High angle-of-attack flight control laws are developed for a supermaneuverable fighter aircraft. The methods of dynamic inversion and structured singular value synthesis are combined into an approach which addresses both the nonlinearity and robustness problems of flight at extreme operating conditions. The primary purpose of the dynamic inversion control elements is to linearize the vehicle response across the flight envelope. Structured singular value synthesis is used to design a dynamic controller which provides robust tracking to pilot commands. The resulting control system achieves desired flying qualities and guarantees a large margin of robustness to uncertainties for high angle-of-attack flight conditions. The results of linear simulation and structured singular value stability analysis are presented to demonstrate satisfaction of the design criteria. High fidelity nonlinear simulation results show that the combined dynamic inversion/structured singular value synthesis control law achieves a high level of performance in a realistic environment.
Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing
2017-12-28
Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial for understanding environmental changes and protecting marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to a single specific imaging device and a relatively narrow taxonomic scope; a truly practical system for automatic plankton classification does not yet exist, and this study partly fills that gap. Guided by this analysis and by developments in technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification that combines multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular adding features such as Inner-Distance Shape Context for morphological representation. Second, we divided the features into different types from multiple views and, instead of feeding them to a single classifier, optimally combined the different kernel matrices computed from the different feature types via multiple kernel learning. Moreover, we applied a feature selection method to choose optimal feature subsets from the redundant features to suit the different datasets from different imaging devices. We implemented the proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validate that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. The results indicate that multiple view features combined by non-linear MKL (NLMKL) using three kernel functions (linear, polynomial and Gaussian) describe and exploit the feature information better, and thus achieve higher classification accuracy.
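As an illustration of combining kernel matrices from different feature views, here is a minimal Python sketch in which fixed weights stand in for the learned NLMKL weights; the random features are placeholders, and feature extraction and the real datasets are omitted:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

def combined_kernel(X1, X2, weights=(0.3, 0.3, 0.4)):
    # Weighted sum of the three kernel types named in the abstract;
    # real MKL would learn these weights from data.
    w1, w2, w3 = weights
    return (w1 * linear_kernel(X1, X2)
            + w2 * polynomial_kernel(X1, X2, degree=2)
            + w3 * rbf_kernel(X1, X2, gamma=0.5))

# usage with a precomputed-kernel SVM (placeholder data)
X_train = np.random.rand(100, 16)
y_train = np.random.randint(0, 3, 100)
clf = SVC(kernel="precomputed").fit(combined_kernel(X_train, X_train), y_train)

X_test = np.random.rand(10, 16)
pred = clf.predict(combined_kernel(X_test, X_train))  # K(test, train) at predict time
```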
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikhonenkov, I.; Vardi, A.; Moore, M. G.
2011-06-15
Mach-Zehnder atom interferometry requires hold-time phase squeezing to attain readout accuracy below the standard quantum limit. This increases its sensitivity to phase diffusion, restoring shot-noise scaling of the optimal signal-to-noise ratio in the presence of interactions. The contradiction between the preparations required for readout accuracy and robustness to interactions is removed by monitoring Rabi-Josephson oscillations instead of relative-phase oscillations during signal acquisition. Optimizing the signal-to-noise ratio with a Gaussian squeezed input, we find that hold-time number squeezing satisfies both demands and that sub-shot-noise scaling is retained even for strong interactions.
Cheng, Wang-Yau; Chen, Ting-Ju; Lin, Chia-Wei; Chen, Bo-Wei; Yang, Ya-Po; Hsu, Hung Yi
2017-02-06
Robust sub-millihertz-level offset locking was achieved with a simple scheme, by which we were able to transfer the frequency stability and accuracy of either a cesium-stabilized diode laser or a comb laser to other diode lasers that previously suffered from serious frequency jitter. The offset lock developed in this paper played an important role in atomic two-photon spectroscopy, with which record resolution and a new determination of the hyperfine constants of the cesium atom were achieved. A quantum-interference experiment was performed to show the improvement in light coherence when an extended design was implemented.
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework consisting of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher-dimensional feature space, in which the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to improve the accuracy and robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
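A minimal sketch of the contrast between an ordinary and a robust linear eQTL fit, using statsmodels with Huber weighting; the simulated genotype/expression data and effect size are placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
genotype = rng.integers(0, 3, 200)                   # allelic dosage 0/1/2
expression = 0.4 * genotype + rng.standard_normal(200)
expression[:5] += 8.0                                # inject a few outliers

X = sm.add_constant(genotype.astype(float))
ols = sm.OLS(expression, X).fit()
rlm = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()
print("OLS beta:", ols.params[1], " robust beta:", rlm.params[1])
```

With the injected outliers, the OLS slope is pulled away from the true effect while the Huber-weighted fit stays close to it, which is the failure mode the abstract describes.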
Toward Immersed Boundary Simulation of High Reynolds Number Flows
NASA Technical Reports Server (NTRS)
Kalitzin, Georgi; Iaccarino, Gianluca
2003-01-01
In the immersed boundary (IB) method, the surface of an object is reconstructed with forcing terms in the underlying flow field equations. The surface may split a computational cell, removing the constraint that the near wall gridlines be aligned with the surface. This feature greatly simplifies the grid generation process, which is cumbersome and expensive, in particular for structured grids and complex geometries. The IB method is ideally suited for Cartesian flow solvers. The flow equations written in Cartesian coordinates appear in a very simple form, and several numerical algorithms can be used for an efficient solution of the equations. In addition, the accuracy of numerical algorithms depends on the underlying grid and usually deteriorates when the grid deviates from a Cartesian mesh. The challenge for the IB method lies in the representation of the wall boundaries and in providing adequate near wall flow field resolution. The issue of enforcing no-slip boundary conditions at the immersed surface has been addressed by several authors by imposing a local reconstruction of the solution. Initial work by Verzicco et al. was based on a simple linear, one-dimensional operator, and this approach proved to be accurate for boundaries largely aligned with the grid lines. Majumdar et al. used various multidimensional and high order polynomial interpolation schemes. These high order schemes, however, are prone to introducing wiggles and spurious extrema. Iaccarino & Verzicco and Kalitzin & Iaccarino proposed a tri-linear reconstruction for the velocity components and the turbulent scalars. A modified implementation that has proven to be more robust is reported in this paper. The issue of adequate near wall resolution in a Cartesian framework can initially be addressed by using a non-uniform mesh which is stretched near the surface. In this paper, we investigate an unstructured approach for local grid refinement that utilizes Cartesian mesh features. The computation of high Reynolds number wall bounded flows is particularly challenging as it requires the consideration of thin turbulent boundary layers, i.e. near wall regions with large gradients of the flow field variables. For such flows, the representation of the wall boundary has a large impact on the accuracy of the computation. It is also critical for the robustness and convergence of the flow solver.
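The tri-linear reconstruction mentioned above reduces to a small kernel in practice. A generic sketch (unit-spaced grid, interior points only; an assumed stand-in, not the authors' exact near-wall operator):

```python
import numpy as np

def trilinear(u, p):
    # Tri-linear reconstruction of a scalar field u sampled on a unit-spaced
    # Cartesian grid, evaluated at point p = (x, y, z).
    i, j, k = (int(np.floor(c)) for c in p)
    fx, fy, fz = p[0] - i, p[1] - j, p[2] - k
    c = u[i:i + 2, j:j + 2, k:k + 2]       # the 2x2x2 cell containing p
    c = c[0] * (1 - fx) + c[1] * fx        # collapse x
    c = c[0] * (1 - fy) + c[1] * fy        # collapse y
    return c[0] * (1 - fz) + c[1] * fz     # collapse z

u = np.arange(64, dtype=float).reshape(4, 4, 4)
print(trilinear(u, (1.5, 2.2, 0.7)))
```

In the IB context, the same stencil is used with the no-slip value imposed at the immersed surface replacing one of the cell's corner values.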
Robust pattern decoding in shape-coded structured light
NASA Astrophysics Data System (ADS)
Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai
2017-09-01
Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometric shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, i.e., the intersections of pairs of orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem, and a deep neural network is applied for accurate classification of the pattern elements; beforehand, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but also exhibits strong robustness to surface color and complex textures.
Application of the Overclaiming Technique to Scholastic Assessment
ERIC Educational Resources Information Center
Paulhus, Delroy L.; Dubois, Patrick J.
2014-01-01
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
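The signal detection indices mentioned in the abstract have a standard formulation; the sketch below computes a d'-like accuracy index and a criterion-like bias index from hit and false-alarm rates, and is an assumed illustration rather than the article's exact scoring:

```python
from scipy.stats import norm

def overclaiming_indices(hit_rate, fa_rate):
    # hit_rate: proportion of real items claimed as known;
    # fa_rate: proportion of foil (nonexistent) items claimed as known.
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    oc_accuracy = zh - zf            # d'-like discrimination of real vs foil items
    oc_bias = -0.5 * (zh + zf)       # criterion-like overall claiming tendency
    return oc_accuracy, oc_bias

print(overclaiming_indices(0.80, 0.30))
```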
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power-fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
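The power-fit general linear model admits a compact sketch: assuming the day-d volume relates to the initial volume as V_d ≈ a·V₀^b, the coefficients can be fitted by linear regression in log space. The data and variable names below are hypothetical:

```python
import numpy as np

def fit_power_model(v0, vd):
    # Fit V_d = a * V0**b in log space: log(V_d) = log(a) + b*log(V0).
    # v0: initial volumes of the training tumors; vd: their volumes on day d.
    b, log_a = np.polyfit(np.log(v0), np.log(vd), 1)
    return np.exp(log_a), b

def predict_volume(v0_new, a, b):
    return a * v0_new ** b

v0 = np.array([12.0, 30.0, 8.5, 22.0])     # cm^3, illustrative values
vd = np.array([9.1, 21.4, 6.9, 16.0])      # volumes on the same treatment day
a, b = fit_power_model(v0, vd)
print(predict_volume(15.0, a, b))
```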
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
The purpose of this work was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise: simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series, and the amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series-analysis-based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor in DCE-MRI. © 2013 Wiley Periodicals, Inc.
Development of 3D Oxide Fuel Mechanics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, B. W.; Casagranda, A.; Pitts, S. A.
This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.
Evaluation for Water Conservation in Agriculture: Using a Multi-Method Econometric Approach
NASA Astrophysics Data System (ADS)
Ramirez, A.; Eaton, D. J.
2012-12-01
Since the 1960s, farmers have implemented new irrigation technology to increase crop production and planting acreage. At that time, technology responded to the increasing demand for food due to world population growth. Currently, the problem of decreased water supply threatens to limit agricultural production. Uncertain precipitation patterns, from prolonged droughts to irregular rains, will continue to hamper planting operations, and farmers are further limited by increased competition for water from rapidly growing urban areas. Irrigation technology promises to reduce water usage while maintaining or increasing farm yields. The challenge for water managers and policy makers is to quantify and redistribute these efficiency gains as a source of 'new water.' Using conservation in farming as a source of 'new water' requires accurately quantifying the efficiency gains of irrigation technology under farmers' actual operations and practices. From a water resource management and policy perspective, the efficiency gains from conservation in farming can be redistributed to municipal, industrial and recreational uses. This paper presents a methodology that water resource managers can use to statistically verify the water savings attributable to conservation technology. The specific conservation technology examined in this study is precision leveling, and the study takes a mixed-methods approach using four different econometric models: Ordinary Least Squares, Fixed Effects, Propensity Score Matching, and Hierarchical Linear Models. These methods are used for ex-post program evaluation where random assignment is not possible, and they could be employed to evaluate agricultural conservation programs, where participation is often self-selected. The principal method in this approach is the Hierarchical Linear Model (HLM), a model well suited to agriculture because it incorporates the hierarchical nature of the data (fields, tenants, and landowners) as well as crop rotation (fields in and out of production). The other three methods verify the accuracy of the HLM model and create a robust comparison of the water savings estimates. Seventeen factors were used to isolate the effect of precision leveling from variations in climate, investments in other irrigation improvements, and farmers' management skills. These statistical analyses yield accurate water savings estimates because they consider farmers' actual irrigation technology and practices. Results suggest that savings from water conservation technology under farmers' actual production systems and management are less than those reported by experimental field studies; these savings measure the 'in situ' effect of the technology. In terms of the accuracy of the models, HLM provides the most precise estimate of the impact of precision leveling on a field's water usage. The HLM estimate was within the 95% confidence interval of the other three models, thus verifying the accuracy and robustness of the statistical findings and model.
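To illustrate the hierarchical-model step, here is a minimal mixed-effects sketch in Python with statsmodels. The data, variable names and single grouping level are hypothetical; the study's actual model nests fields within tenants within landowners and uses seventeen covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "leveled": rng.integers(0, 2, n),        # precision-leveling indicator
    "rainfall": rng.normal(20, 4, n),        # one stand-in covariate
    "landowner": rng.integers(0, 25, n),     # grouping level
})
# synthetic water use with a built-in leveling effect of -3 units
df["water_use"] = 30 - 3.0 * df["leveled"] + 0.2 * df["rainfall"] + rng.normal(0, 2, n)

# random intercept per landowner captures unobserved management differences
m = smf.mixedlm("water_use ~ leveled + rainfall", df, groups=df["landowner"]).fit()
print(m.params["leveled"])                   # estimated leveling effect
```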
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.
1994-01-01
It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDEs) may, because of boundary conditions, lead to deterioration of the accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic PDEs (linear and non-linear) whose boundary treatment is done via the SAT procedure. A methodology is presented for recovery of the full order of accuracy and is applied to a 4th-order explicit finite difference scheme.
NASA Astrophysics Data System (ADS)
Maalek, R.; Lichti, D. D.; Ruwanpura, J.
2015-08-01
The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained from the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
Zhang, Yiwei; Li, Jian; Meng, Zhiyun; Zhu, Xiaoxia; Gan, Hui; Gu, Ruolan; Wu, Zhuona; Zheng, Ying; Wei, Jinbin; Dou, Guifang
2017-06-15
17-Ethinyl-3,17-dihydroxyandrost-5-ene (EAD) is an agent designed for the treatment of acute radiation syndrome (ARS). Given its vital role in the prevention and mitigation of ARS, the development of a simple, sensitive and robust liquid chromatography tandem mass spectrometry (LC-MS/MS) method to monitor the metabolism of EAD in vivo was crucial. A new method was constructed and validated for the determination of EAD using androst-5-ene-3β,17β-diol (5-AED) as the internal standard. The blood samples were precipitated with methanol and centrifuged; the supernatant was separated by UPLC on a C18 column and eluted in gradient mode with acetonitrile and Milli-Q water, both containing 0.1% formic acid (FA). Quantification was performed by a triple quadrupole mass spectrometer with electrospray ionization (ESI) in multiple reaction monitoring (MRM) positive mode. Good linearity was obtained with R > 0.99 for EAD within its calibration range from 5 to 1000 ng mL⁻¹, with a lower limit of quantification (LLOQ) of 5 ng mL⁻¹. Inter- and intra-day accuracy and precision of three levels of quality control (QC) samples were within 15%, while those at the LLOQ were within 20%. Samples were stable under the experimental conditions. The method was simple, accurate and robust, and was applied to determine the concentrations of EAD in Wistar rats after a single oral administration of EAD at a dose of 100 mg kg⁻¹. Copyright © 2017 Elsevier B.V. All rights reserved.
Distinguishing body mass and activity level from the lower limb: can entheses diagnose obesity?
Godde, Kanya; Taylor, Rebecca Wilson
2013-03-10
The ability to estimate body size from the skeleton has broad applications, but is especially important to the forensic community when identifying unknown skeletal remains. This research investigates the utility of entheses/muscle skeletal markers of the lower limb for estimating body size and classifying individuals into average, obese, and active categories, using a biomechanical approach to interpret the results. Eighteen muscle attachment sites of the lower limb, known to be involved in the sit-to-stand transition, were scored for robusticity and stress in 105 white males (aged 31-81 years) from the William M. Bass Donated Skeletal Collection. Both logistic regression and log-linear models were applied to the data (1) to test the utility of entheses as an indicator of body weight and activity level, and (2) to generate classification percentages that speak to the accuracy of the method. Thirteen robusticity scores differed significantly between the groups, but classification percentages were only slightly greater than chance. However, clear differences could be seen between the average and obese groups and between the average and active groups. Stress scores showed no value in discriminating between groups. These results were interpreted in relation to biomechanical forces at the microscopic and macroscopic levels. Even though robusticity alone is not able to classify individuals well, it may show greater value when incorporated into a model with multiple skeletal indicators. Further research needs to evaluate a larger sample and incorporate several lines of evidence to improve classification rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Kim, Min Kyung; Yang, Dong-Hyug; Jung, Mihye; Jung, Eun Ha; Eom, Han Young; Suh, Joon Hyuk; Min, Jung Won; Kim, Unyong; Min, Hyeyoung; Kim, Jinwoong; Han, Sang Beom
2011-09-16
Methods using high performance liquid chromatography with diode array detection (HPLC-DAD) and tandem mass spectrometry (HPLC-MS/MS) were developed and validated for the simultaneous determination of 5 chromones and 6 coumarins: prim-O-glucosylcimifugin (1), cimifugin (2), nodakenin (3), 4'-O-β-d-glucosyl-5-O-methylvisamminol (4), sec-O-glucosylhamaudol (5), psoralen (6), bergapten (7), imperatorin (8), phellopterin (9), 3'-O-angeloylhamaudol (10) and anomalin (11), in Radix Saposhnikoviae. The separation conditions for HPLC-DAD were optimized using an Ascentis Express C18 (4.6 mm × 100 mm, 2.7 μm particle size) fused-core column. The mobile phase was composed of 10% aqueous acetonitrile (A) and 90% acetonitrile (B), and the elution was performed in gradient mode at a flow rate of 1.0 mL/min. The detection wavelength was set at 300 nm. The HPLC-DAD method yielded a baseline separation of the 11 components in the 50% methanol extract of Radix Saposhnikoviae with no interfering peaks detected. The HPLC-DAD method was validated in terms of linearity, accuracy and precision (intra- and inter-day), limit of quantification (LOQ), recovery, and robustness. Specific determination of the 11 components was also accomplished by a triple quadrupole tandem mass spectrometer equipped with an electrospray ionization (ESI) source. This HPLC-MS/MS method was also validated by determining the linearity, limit of quantification, accuracy, and precision. Quantification of the 11 components in 51 commercial Radix Saposhnikoviae samples was successfully performed using the developed HPLC-DAD method. The identity, batch-to-batch consistency, and authenticity of Radix Saposhnikoviae were successfully monitored by the proposed HPLC-DAD and HPLC-MS/MS methods. Copyright © 2011 Elsevier B.V. All rights reserved.
Bhatt, Nejal M; Chavada, Vijay D; Sanyal, Mallika; Shrivastav, Pranav S
2016-11-18
A simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the analysis of proton pump inhibitors (PPIs) and their co-formulated drugs, available as binary combinations. Planar chromatographic separation of the 14 analytes was achieved on aluminium-backed layers of silica gel 60 GF254 using a single mobile phase comprising toluene:iso-propanol:acetone:ammonia (5.0:2.3:2.5:0.2, v/v/v/v). Densitometric determination of the separated spots was done at 290 nm. The method was validated according to ICH guidelines for linearity, precision and accuracy, sensitivity, specificity and robustness. The method showed good linear response for the selected drugs, as indicated by the high values of the correlation coefficients (≥0.9993). The limits of detection and quantitation were in the ranges of 6.9-159.2 ng/band and 20.8-478.1 ng/band, respectively, for all the analytes. The optimized conditions afforded adequate resolution of each PPI from its co-formulated drugs and provided unambiguous identification of the co-formulated drugs from their retardation factors (hRf). The only limitation of the method was the inability to separate two PPIs, rabeprazole and lansoprazole, from each other. Nevertheless, it is proposed that recording peak spectra and comparing them with a standard drug spot can be a viable option for assignment of TLC spots. The method performance was assessed by analyzing different laboratory-simulated mixtures and some marketed formulations of the selected drugs. The developed method was successfully used to investigate potential counterfeits of PPIs through a series of simulated formulations with good accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.
Absolute GPS Positioning Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ramillien, G.
A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in the different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant noise levels are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
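A toy version of the GA inversion fits in a few lines. The satellite positions, operator details and population handling below are illustrative assumptions, not the paper's implementation; clock bias is ignored, and a real run needs tuned operators and many more generations:

```python
import numpy as np

def cost(pos, sats, ranges):
    # Sum of squared pseudo-range residuals for a candidate XYZ position.
    return np.sum((np.linalg.norm(sats - pos, axis=1) - ranges) ** 2)

def ga_position(sats, ranges, n=1000, gens=100, pc=0.65, pm=0.35):
    rng = np.random.default_rng(0)
    pop = rng.uniform(-7e6, 7e6, (n, 3))                  # candidate XYZ (m)
    for _ in range(gens):
        f = np.array([cost(p, sats, ranges) for p in pop])
        parents = pop[np.argsort(f)[: n // 2]]            # truncation selection
        children = parents.copy()
        cross = rng.random(len(children)) < pc            # blend crossover
        mates = parents[rng.integers(0, len(parents), cross.sum())]
        children[cross] = 0.5 * (children[cross] + mates)
        mutate = rng.random(children.shape) < pm          # Gaussian mutation
        children[mutate] += rng.normal(0.0, 1e3, mutate.sum())
        pop = np.vstack([parents, children])
    return min(pop, key=lambda p: cost(p, sats, ranges))

sats = np.array([[15600e3, 7540e3, 20140e3],
                 [18760e3, 2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3]])
truth = np.array([1.1e6, 2.0e6, 3.2e6])
print(ga_position(sats, np.linalg.norm(sats - truth, axis=1)))
```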
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2004-01-01
This project investigates the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. Other related issues in high order WENO finite difference and finite volume methods have also been investigated. Discontinuous Galerkin and WENO methods are two classes of high order, high resolution methods suitable for convection dominated simulations with possibly discontinuous or sharp gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present some quantitative comparisons of the third order finite volume WENO methods and discontinuous Galerkin methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for non-linear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, which allows the method to be non-linearly stable regardless of its accuracy, with a finite element space discretization by discontinuous approximations that incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of the high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions. We review the theoretical and algorithmic aspects of these methods and show several applications, including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.
Volpi, Nicola
2009-04-05
A new robust CE method for the determination of the glucosamine (GlcN) content in nutraceutical formulations is described, after derivatization of GlcN with anthranilic acid (2-aminobenzoic acid, AA). The CE separation of GlcN derivatized with AA was performed on an uncoated fused-silica capillary tube (50 μm I.D.) using an operating pH 7.0 buffer of 150 mM boric acid/50 mM NaH2PO4 and UV detection at 214 nm. The method was validated for specificity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The detector response for GlcN was linear over the selected concentration range from 240 to 2400 pg (40-400 μg/mL) with a correlation coefficient greater than 0.980. The intra- and inter-day variations (CV%) were between 0.5 and 0.9 for migration time, and between 2.8 and 4.3 for peak area, respectively. The LOD and the LOQ of the method were approximately 200 and 500 pg, respectively. The intra- and inter-day accuracy was estimated to range from 2.8% to 5.1%, while the percent recoveries of GlcN in formulations were calculated to be about 100% after simple centrifugation for 10 min, lyophilization and derivatization with AA. The CE method was applied to the determination of the GlcN content, in the form of GlcN hydrochloride or GlcN sulfate, of several nutraceutical preparations in the presence of other ingredients, such as chondroitin sulfate, vitamin C and/or methylsulfonylmethane (MSM), as well as salts and other agents. The quantitative results obtained were in total conformity with the label claims.
Dingari, Narahara Chari; Barman, Ishan; Kang, Jeon Woong; Kong, Chae-Ryon; Dasari, Ramachandra R.; Feld, Michael S.
2011-01-01
While Raman spectroscopy provides a powerful tool for noninvasive and real-time diagnostics of biological samples, its translation to the clinical setting has been impeded by the lack of robustness of spectroscopic calibration models and the size and cumbersome nature of conventional laboratory Raman systems. Linear multivariate calibration models employing full spectrum analysis are often misled by spurious correlations, such as system drift and covariations among constituents. In addition, such calibration schemes are prone to overfitting, especially in the presence of external interferences that may create nonlinearities in the spectra-concentration relationship. To address both of these issues we incorporate residue error plot-based wavelength selection and nonlinear support vector regression (SVR). Wavelength selection is used to eliminate uninformative regions of the spectrum, while SVR is used to model curved effects such as those created by tissue turbidity and temperature fluctuations. Using glucose detection in tissue phantoms as a representative example, we show that even a substantial reduction in the number of wavelengths analyzed, when combined with SVR, leads to calibration models of prediction accuracy equivalent to linear full spectrum analysis. Further, with clinical datasets obtained from human subject studies, we also demonstrate the prospective applicability of the selected wavelength subsets without sacrificing prediction accuracy, which has extensive implications for calibration maintenance and transfer. Additionally, such wavelength selection could substantially reduce the collection time of serial Raman acquisition systems. Given the reduced footprint of serial Raman systems in relation to conventional dispersive Raman spectrometers, we anticipate that the incorporation of wavelength selection in such hardware designs will enhance the possibility of miniaturized clinical systems for disease diagnosis in the near future. PMID:21895336
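A minimal sketch of the two-step idea, selecting informative wavelengths and then fitting a nonlinear SVR. The synthetic spectra are placeholders, and the simple correlation-based ranking stands in for the paper's residue-error-plot criterion:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
spectra = rng.normal(size=(80, 500))                   # 80 spectra, 500 wavelengths
conc = spectra[:, 120] * 2 + rng.normal(0, 0.1, 80)    # synthetic analyte signal

# crude informativeness score: |correlation| of each wavelength with concentration
corr = np.abs(np.corrcoef(spectra.T, conc)[-1, :-1])
keep = np.argsort(corr)[-50:]                          # keep the 50 best channels

# nonlinear SVR on the reduced wavelength subset
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(spectra[:, keep], conc)
print(model.predict(spectra[:5, keep]))
```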
Closed-loop stability of linear quadratic optimal systems in the presence of modeling errors
NASA Technical Reports Server (NTRS)
Toda, M.; Patel, R.; Sridhar, B.
1976-01-01
The well-known stabilizing property of linear quadratic state feedback design is utilized to evaluate the robustness of a linear quadratic feedback design in the presence of modeling errors. Two general conditions are obtained for allowable modeling errors such that the resulting closed-loop system remains stable. One of these conditions is applied to obtain two more particular conditions which are readily applicable to practical situations where a designer has information on the bounds of modeling errors. Relations are established between the allowable parameter uncertainty and the weighting matrices of the quadratic performance index, thereby enabling the designer to select appropriate weighting matrices to attain a robust feedback design.
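The stabilizing property being exploited can be checked numerically in a few lines. A sketch with assumed example matrices (not from the abstract's setting): design an LQ state-feedback gain via the algebraic Riccati equation, then test closed-loop stability under a perturbed model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # nominal model (example values)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P              # LQ state-feedback gain

dA = np.array([[0.0, 0.0], [0.3, 0.0]])     # a hypothetical modeling error
eigs = np.linalg.eigvals(A + dA - B @ K)
print("closed loop stable:", np.all(eigs.real < 0))
```

Sweeping dA over a parameter range is the numerical analogue of checking the allowable-error conditions the abstract derives.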
Robust Nonlinear Feedback Control of Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, William L.; Balas, Gary J.; Litt, Jonathan (Technical Monitor)
2001-01-01
This is the final report on the research performed under NASA Glenn grant NASA/NAG-3-1975 concerning feedback control of the Pratt & Whitney (PW) STF 952, a twin spool, mixed flow, afterburning turbofan engine. The research focused on the design of linear and gain-scheduled, multivariable inner-loop controllers for the PW turbofan engine using H-infinity and linear parameter-varying (LPV) control techniques. The nonlinear turbofan engine simulation was provided by PW within the NASA Rocket Engine Transient Simulator (ROCETS) simulation software environment. ROCETS was used to generate linearized models of the turbofan engine for control design and analysis, as well as the simulation environment to evaluate the performance and robustness of the controllers. Comparisons are made between the H-infinity and LPV controllers and the baseline multivariable controller developed by Pratt & Whitney engineers that is included in the ROCETS simulation. Simulation results indicate that the H-infinity and LPV techniques effectively achieve desired response characteristics with minimal cross coupling between commanded values and are very robust to unmodeled dynamics and sensor noise.
H.264/AVC digital fingerprinting based on spatio-temporal just noticeable distortion
NASA Astrophysics Data System (ADS)
Ait Saadi, Karima; Bouridane, Ahmed; Guessoum, Abderrezak
2014-01-01
This paper presents a robust adaptive embedding scheme using a modified spatio-temporal just noticeable distortion (JND) model, designed for tracing the distribution of H.264/AVC video content and protecting it from unauthorized redistribution. The embedding process is performed during encoding, in selected Intra 4x4 macroblocks within I-frames. The method uses a spread-spectrum technique in order to obtain robustness against collusion attacks, and the JND model to dynamically adjust the embedding strength and control the energy of the embedded fingerprints so as to ensure their imperceptibility. Linear and non-linear collusion attacks are performed to show the robustness of the proposed technique while keeping the visual quality unchanged.
Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2000-01-01
Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
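As a sketch of the robust-regression idea in a single-voxel general linear model, an M-estimator downweights outlier subjects that would otherwise drive the ordinary fit. statsmodels' RLM with a Tukey biweight is used here as a stand-in for the authors' implementation; the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
structure = rng.standard_normal(n)                 # e.g. voxel volume
function = 0.8 * structure + 0.2 * rng.standard_normal(n)
function[:3] += 6.0                                # mis-registration outliers

X = sm.add_constant(structure)                     # GLM design matrix
ols = sm.OLS(function, X).fit()                    # ordinary least squares
rlm = sm.RLM(function, X, M=sm.robust.norms.TukeyBiweight()).fit()
print("OLS slope:", ols.params[1], " robust slope:", rlm.params[1])
```

The robust slope stays near the true 0.8 while the OLS slope is pulled by the three contaminated subjects, which mirrors the artifactual inferences the abstract warns about.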
Applications of wavelets in interferometry and artificial vision
NASA Astrophysics Data System (ADS)
Escalona Z., Rafael A.
2001-08-01
In this paper we present a different point of view on phase measurements performed in interferometry, image processing and intelligent vision using the wavelet transform. In standard and white-light interferometry, the phase function is retrieved by using phase-shifting, Fourier-transform, cosine-inversion and other known algorithms. The novel technique presented here is faster, more robust and shows excellent accuracy in phase determination. Finally, in our second application, fringes are no longer generated by optical interaction but result from the observation of adapted strip-set patterns directly printed on the target of interest. The moving target is simply observed by a conventional vision system and the usual phase computation algorithms are adapted to image processing by wavelet transform, in order to sense target position and displacements with high accuracy. In general, we have determined that the wavelet transform offers robustness, relative speed of calculation and very high accuracy in phase computations.
NASA Astrophysics Data System (ADS)
Gao, Xiangdong; Chen, Yuquan; You, Deyong; Xiao, Zhenlin; Chen, Xiaohui
2017-02-01
An approach for seam tracking of micro gap welds whose width is less than 0.1 mm, based on the magneto-optical (MO) imaging technique, during butt-joint laser welding of steel plates is investigated. Kalman filtering (KF) combined with a radial basis function (RBF) neural network was applied to weld detection by an MO sensor to track the weld center position. Because the laser welding process noises and the MO sensor measurement noises were colored, and the system model contained strong nonlinearities that a linear state-space model cannot capture, the estimation accuracy of the traditional KF for seam tracking was degraded; moreover, the noise statistics could not be accurately obtained during actual welding. Thus, an RBF neural network was combined with the KF technique to compensate for the weld tracking errors. The neural network can restrain filter divergence and improve system robustness. Compared with the traditional KF algorithm, the RBF-assisted KF not only improved the weld tracking accuracy more effectively but also reduced noise disturbance. Experimental results showed that the magneto-optical imaging technique can detect micro gap welds accurately, which provides a novel approach for micro gap seam tracking.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
Vision navigation determines position and attitude via real-time image processing of data collected from imaging sensors, without requiring a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel imaging sensor-aided vision navigation approach that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m.
NASA Astrophysics Data System (ADS)
Lakey, Chad E.; Berry, Daniel R.; Sellers, Eric W.
2011-04-01
In this study, we examined the effects of a short mindfulness meditation induction (MMI) on the performance of a P300-based brain-computer interface (BCI) task. We expected that MMI would harness present-moment attentional resources, resulting in two positive consequences for P300-based BCI use. Specifically, we believed that MMI would facilitate increases in task accuracy and promote the production of robust P300 amplitudes. Sixteen-channel electroencephalographic data were recorded from 18 subjects using a row/column speller task paradigm. Nine subjects participated in a 6 min MMI and an additional nine subjects served as a control group. Subjects were presented with a 6 × 6 matrix of alphanumeric characters on a computer monitor. Stimuli were flashed at a stimulus onset asynchrony (SOA) of 125 ms. Calibration data were collected on 21 items without providing feedback. These data were used to derive a stepwise linear discriminant analysis classifier that was applied to an additional 14 items to evaluate accuracy. Offline performance analyses revealed that MMI subjects were significantly more accurate than control subjects. Likewise, MMI subjects produced significantly larger P300 amplitudes than control subjects at Cz and PO7. The discussion focuses on the potential attentional benefits of MMI for P300-based BCI performance.
Wang, Shunhai; Bobst, Cedric E; Kaltashov, Igor A
2015-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein that is viewed as a promising drug carrier to target the central nervous system as a result of its ability to penetrate the blood-brain barrier. Among the many challenges during the development of Tf-based therapeutics, the sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult because of the presence of abundant endogenous Tf. Herein, we describe the development of a new liquid chromatography-mass spectrometry-based method for the sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous human serum Tf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed (18)O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision, and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation.
Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings
Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo
2018-01-01
In this paper, we propose a novel object-based dense matching method designed specially for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which maintains the building structural attribute of parallel or vertical edges; this is very useful for the dense matching method. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method verifies its superiority and robustness on panchromatic images from different satellites and at different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets tailored to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed from physicians.
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects
2011-01-01
Background Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets tailored to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed from physicians. PMID:21707989
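A hedged sketch of the processing chain on synthetic beats: wavelet features, PCA and LDA compression, then an extreme-learning-machine-style readout. A stock Morlet wavelet stands in for the subject-dedicated wavelet, and the two synthetic beat shapes stand in for real ECG morphologies.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)

def beat(kind):
    width = 0.15 if kind else 0.05            # "arrhythmic" beats are wider
    return np.exp(-((t - 0.5) / width) ** 2) + 0.05 * rng.standard_normal(t.size)

X_raw = np.array([beat(k) for k in [0, 1] * 100])
y = np.array([0, 1] * 100)

# Continuous wavelet transform features: mean |coefficient| per scale.
feats = np.array([np.abs(pywt.cwt(x, np.arange(1, 31), "morl")[0]).mean(axis=1)
                  for x in X_raw])

z = PCA(n_components=10).fit_transform(feats)          # compress features
z = LinearDiscriminantAnalysis(n_components=1).fit_transform(z, y)

# ELM-style readout: fixed random hidden layer, least-squares output weights.
W = rng.standard_normal((z.shape[1], 50))
H = np.tanh(z @ W)
beta, *_ = np.linalg.lstsq(H, np.eye(2)[y], rcond=None)
print("training accuracy:", ((H @ beta).argmax(axis=1) == y).mean())
```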
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-01
Vision navigation determines position and attitude via real-time image processing of data collected from imaging sensors, without requiring a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel imaging sensor-aided vision navigation approach that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m. PMID:26828496
He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin
2011-01-01
Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem of large scale simultaneous localization and mapping (SLAM) and its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents combined SLAM, an efficient submap-based solution to the SLAM problem in large scale environments. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM can avoid linearization of the robot model during operation and provides robust data association, while EIF-SLAM improves the overall computational speed and avoids the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as computational efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment using the Victoria Park dataset.
Genomic selection in sugar beet breeding populations
2013-01-01
Background Genomic selection exploits dense genome-wide marker data to predict breeding values. In this study we used a large sugar beet population of 924 lines representing different germplasm types present in breeding populations: unselected segregating families and diverse lines from more advanced stages of selection. All lines have been intensively phenotyped in multi-location field trials for six agronomically important traits and genotyped with 677 SNP markers. Results We used ridge regression best linear unbiased prediction in combination with fivefold cross-validation and obtained high prediction accuracies for all except one trait. In addition, we investigated whether a calibration developed based on a training population composed of diverse lines is suited to predict the phenotypic performance within families. Our results show that the prediction accuracy is lower than that obtained within the diverse set of lines, but comparable to that obtained by cross-validation within the respective families. Conclusions The results presented in this study suggest that a training population derived from intensively phenotyped and genotyped diverse lines from a breeding program does hold potential to build up robust calibration models for genomic selection. Taken together, our results indicate that genomic selection is a valuable tool and can thus complement the genomics toolbox in sugar beet breeding. PMID:24047500
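Ridge regression on markers is the workhorse behind ridge regression BLUP, so the prediction pipeline can be sketched directly with scikit-learn. Genotypes and phenotypes below are simulated; only the marker count echoes the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_lines, n_markers = 300, 677
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes
true_effects = 0.1 * rng.standard_normal(n_markers)
y = X @ true_effects + rng.standard_normal(n_lines)              # phenotype

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    # Prediction accuracy: correlation of predicted and observed values.
    accs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
print("mean fivefold prediction accuracy:", np.mean(accs))
```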
Automated brain volumetrics in multiple sclerosis: a step closer to clinical application
Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G
2016-01-01
Background Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix, for assessment of cross-sectional WBV in patients with MS. Methods MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement was calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Results Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. Conclusions In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. PMID:27071647
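The agreement statistics used above reduce to a correlation plus a Bland-Altman analysis (bias and limits of agreement), which can be sketched in a few lines; the volumes below are simulated, not study data.

```python
import numpy as np

rng = np.random.default_rng(0)
sienax = rng.normal(1500, 80, 63)             # WBV in mL for 63 patients
other = sienax * 0.99 + rng.normal(0, 8, 63)  # second pipeline, ~1% offset

r = np.corrcoef(sienax, other)[0, 1]          # precision (Pearson's r)
diff = other - sienax
bias = diff.mean()                            # systematic volume difference
loa = 1.96 * diff.std(ddof=1)                 # 95% limits of agreement
print(f"r = {r:.3f}, bias = {bias:.1f} mL, LoA = +/-{loa:.1f} mL")
```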
NASA Astrophysics Data System (ADS)
Ulu, Sevgi Tatar
2009-06-01
A highly sensitive spectrofluorimetric method was developed, for the first time, for the analysis of three fluoroquinolone (FQ) antibacterials, namely enrofloxacin (ENR), levofloxacin (LEV) and ofloxacin (OFL), in pharmaceutical preparations through charge transfer (CT) complex formation with 2,3,5,6-tetrachloro-p-benzoquinone (chloranil, CLA). At the optimum reaction conditions, the FQ-CLA complexes showed excitation maxima ranging from 359 to 363 nm and emission maxima ranging from 442 to 488 nm. Rectilinear calibration graphs were obtained over the concentration ranges of 50-1000, 50-1000 and 25-500 ng mL−1 for ENR, LEV and OFL, respectively. The detection limits were found to be 17, 17 and 8 ng mL−1 for ENR, LEV and OFL, respectively. Excipients used as additives in commercial formulations did not interfere in the analysis. The method was validated according to the ICH guidelines with respect to specificity, linearity, accuracy, precision and robustness. The proposed method was successfully applied to the analysis of pharmaceutical preparations. The results obtained were in good agreement with those obtained using the official method, with no significant difference in accuracy and precision as revealed by the accepted values of the t- and F-tests, respectively.
Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness
NASA Astrophysics Data System (ADS)
Julich, R. J.
2004-05-01
The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; 2) what is the consistency of solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
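The two-part fitness can be sketched directly: reward well subsets whose predictive uncertainty stays close to the all-observations value, and penalize subset size. The "uncertainty" function below is a toy stand-in for the linear statistical inference calculation, and all GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wells = 50
value = rng.uniform(0.5, 2.0, n_wells)        # each well's information content

def uncertainty(mask):
    # Toy stand-in: more informative wells -> lower predictive uncertainty.
    return 10.0 / (1.0 + value[mask.astype(bool)].sum())

full = uncertainty(np.ones(n_wells))          # all observations present

def fitness(mask, cost_weight=0.05):
    # Part 1: closeness to the full-network uncertainty; part 2: size penalty.
    return -abs(uncertainty(mask) - full) - cost_weight * mask.sum()

# Minimal GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, (40, n_wells))
for _ in range(200):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[[max(rng.choice(40, 3), key=lambda i: scores[i])
                   for _ in range(40)]]
    cross = rng.integers(0, 2, pop.shape).astype(bool)
    pop = np.where(cross, parents, np.roll(parents, 1, axis=0))
    pop ^= (rng.random(pop.shape) < 0.01).astype(pop.dtype)   # mutation
best = pop[np.argmax([fitness(m) for m in pop])]
print("wells kept:", int(best.sum()), "of", n_wells)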
Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.
Erdem, Hamit
2010-10-01
Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
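One classic software approach in this family is a lookup table with piecewise-linear interpolation, trading a little memory for speed on an integer MCU. The sensor curve and calibration points below are assumptions, loosely shaped like an optical distance sensor's inverse-distance response, not the paper's device.

```python
import numpy as np

# Off-line: build a 16-entry LUT from calibration points (ADC code -> cm).
adc_cal = np.linspace(80, 500, 16)          # raw ADC codes (assumed)
dist_cal = 2400.0 / adc_cal                 # "true" distances, cm (assumed)

def linearize(adc_code):
    """Piecewise-linear interpolation between LUT entries."""
    i = np.clip(np.searchsorted(adc_cal, adc_code) - 1, 0, len(adc_cal) - 2)
    frac = (adc_code - adc_cal[i]) / (adc_cal[i + 1] - adc_cal[i])
    return dist_cal[i] + frac * (dist_cal[i + 1] - dist_cal[i])

# Accuracy check against the assumed true curve on a dense grid.
adc = np.linspace(85, 495, 200)
err = np.abs(linearize(adc) - 2400.0 / adc)
print("max interpolation error: %.3f cm" % err.max())
```

On a real microcontroller the same logic would run in fixed point; the LUT size is the knob that trades memory space against linearization accuracy, which is exactly the comparison the paper quantifies.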
NASA Astrophysics Data System (ADS)
Pu, Zhiqiang; Tan, Xiangmin; Fan, Guoliang; Yi, Jianqiang
2014-08-01
Flexible air-breathing hypersonic vehicles feature significant uncertainties which pose huge challenges to robust controller designs. In this paper, four major categories of uncertainties are analyzed, that is, uncertainties associated with flexible effects, aerodynamic parameter variations, external environmental disturbances, and control-oriented modeling errors. A uniform nonlinear uncertainty model is explored for the first three uncertainties which lumps all uncertainties together and consequently is beneficial for controller synthesis. The fourth uncertainty is additionally considered in stability analysis. Based on these analyses, the starting point of the control design is to decompose the vehicle dynamics into five functional subsystems. Then a robust trajectory linearization control (TLC) scheme consisting of five robust subsystem controllers is proposed. In each subsystem controller, TLC is combined with the extended state observer (ESO) technique for uncertainty compensation. The stability of the overall closed-loop system with the four aforementioned uncertainties and additional singular perturbations is analyzed. Particularly, the stability of nonlinear ESO is also discussed from a Liénard system perspective. At last, simulations demonstrate the great control performance and the uncertainty rejection ability of the robust scheme.
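The ESO-based compensation at the heart of each subsystem controller can be sketched for a second-order plant: a third-order linear observer whose extra state tracks the lumped disturbance. The plant, observer bandwidth, and disturbance below are illustrative, not the vehicle model.

```python
import numpy as np

dt, T = 0.001, 5.0
x = np.zeros(2)                 # plant states: position, velocity
z = np.zeros(3)                 # ESO states: est. pos, est. vel, est. disturbance
wo = 30.0                       # observer bandwidth (assumed)
l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3    # standard bandwidth-based gains

for k in range(int(T / dt)):
    t = k * dt
    f = 2.0 * np.sin(2 * t)               # unknown lumped disturbance
    u = 0.0                               # open loop: observer test only
    # Plant: x1' = x2, x2' = f + u
    x += dt * np.array([x[1], f + u])
    # Linear ESO driven by the measured output y = x1
    e = x[0] - z[0]
    z += dt * np.array([z[1] + l1 * e, z[2] + u + l2 * e, l3 * e])

print("true disturbance %.3f, ESO estimate %.3f" % (f, z[2]))
```

In the closed loop, subtracting the estimate z3 from the control channel is what cancels the lumped uncertainty, which is the compensation role the abstract assigns to the ESO.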
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all channels. The scheme is composed of robust Kalman filters (RKF) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm-bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over methods available in the literature.
2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and the robustness of the software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or that they may alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.
Robust output tracking control of a laboratory helicopter for automatic landing
NASA Astrophysics Data System (ADS)
Liu, Hao; Lu, Geng; Zhong, Yisheng
2014-11-01
In this paper, robust output tracking control problem of a laboratory helicopter for automatic landing in high seas is investigated. The motion of the helicopter is required to synchronise with that of an oscillating platform, e.g. the deck of a vessel subject to wave-induced motions. A robust linear time-invariant output feedback controller consisting of a nominal controller and a robust compensator is designed. The robust compensator is introduced to restrain the influences of parametric uncertainties, nonlinearities and external disturbances. It is shown that robust stability and robust tracking property can be achieved simultaneously. Experimental results on the laboratory helicopter for automatic landing demonstrate the effectiveness of the designed control approach.
Kenngott, Hannes Götz; Preukschas, Anas Amin; Wagner, Martin; Nickel, Felix; Müller, Michael; Bellemann, Nadine; Stock, Christian; Fangerau, Markus; Radeleff, Boris; Kauczor, Hans-Ulrich; Meinzer, Hans-Peter; Maier-Hein, Lena; Müller-Stich, Beat Peter
2018-06-01
Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery, and particularly for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In the porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible in a male volunteer. Mobile, real-time, point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human models in this study. Consequently, AR implemented along similar lines is robust and accurate enough to be evaluated in clinical trials assessing accuracy, robustness in clinical reality, and integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at the patient bedside.
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4 mm on average with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
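A geometric sketch of the measurement-based idea, under assumed (not the paper's calibrated) transformation values: the tip position follows from the template hole coordinates, a fixed template-to-image transformation, and the measured residual needle length.

```python
import numpy as np

def tip_position(hole_xy, needle_len, residual_len, transform):
    """Return tip (x, y, depth) in image coordinates under assumed geometry."""
    insertion_depth = needle_len - residual_len     # length inside the patient
    x, y = transform["scale"] * np.asarray(hole_xy) + transform["offset"]
    return np.array([x, y, insertion_depth - transform["depth_offset"]])

# Hypothetical calibration values standing in for the pre-established factor.
transform = {"scale": 1.0, "offset": np.array([0.0, 0.0]), "depth_offset": 12.0}
print(tip_position(hole_xy=(30.0, 42.5), needle_len=240.0,
                   residual_len=65.0, transform=transform))
```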
Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text
ERIC Educational Resources Information Center
Reid, Alan J.; Morrison, Gary R.; Bol, Linda
2017-01-01
This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are non-linearly separable, SVM uses kernel tricks to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as linear, polynomial, radial basis function (RBF) and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved from the kernel baselines of linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, and sigmoid: 78.70%. However, for bigger data sizes, this method is not practical because it takes a lot of time.
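A sketch of the GA-tuned SVM idea with a mutation-only evolutionary loop over log-scaled C and gamma, scored by cross-validated accuracy; a bundled scikit-learn dataset stands in for the Australian Credit Approval data, and all GA settings are illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # stand-in credit-style dataset

def score(genes):
    C, gamma = 10.0 ** genes                 # genes are log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=3).mean()

pop = rng.uniform([-2, -6], [3, 0], size=(12, 2))   # log-space search ranges
for _ in range(10):
    fit = np.array([score(g) for g in pop])
    elite = pop[np.argsort(fit)[-4:]]               # keep the 4 best
    children = elite[rng.integers(0, 4, (8,))] + rng.normal(0, 0.3, (8, 2))
    pop = np.vstack([elite, children])              # elitism + Gaussian mutation

best = pop[np.argmax([score(g) for g in pop])]
print("best log10(C), log10(gamma):", best)
```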
NASA Astrophysics Data System (ADS)
Thompson, A. P.; Swiler, L. P.; Trott, C. R.; Foiles, S. M.; Tucker, G. J.
2015-03-01
We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
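The fitting step described above is ordinary weighted linear least squares: given per-configuration bispectrum descriptors B, reference QM energies E, and weights w, the SNAP coefficients solve min ||sqrt(w)(B·beta − E)||². A sketch with random placeholder data; the descriptor count loosely echoes typical bispectrum truncations, not a specific SNAP fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_configs, n_bispectrum = 400, 56
B = rng.standard_normal((n_configs, n_bispectrum))   # per-config descriptors
beta_true = rng.standard_normal(n_bispectrum)
E = B @ beta_true + 0.01 * rng.standard_normal(n_configs)   # "QM" energies
w = rng.uniform(0.5, 2.0, n_configs)                 # per-config weights

# Weighted least squares: scale rows by sqrt(w), then solve.
sw = np.sqrt(w)[:, None]
beta, *_ = np.linalg.lstsq(sw * B, sw[:, 0] * E, rcond=None)
print("max coefficient error:", np.abs(beta - beta_true).max())
```

Because the model is linear in the coefficients, refitting against an enlarged training set is a single solve, which is what makes the "robust, automated" large-scale fitting the abstract describes practical.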
Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben
2013-11-01
Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
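On a simplified mono-exponential model ln S = ln S0 − bD (a stand-in for the full tensor fit), the two weighting strategies compare in a few lines; the b-values, signal level, and noise are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.linspace(0, 3000, 30)                         # b-values, s/mm^2
S0, D = 1000.0, 0.7e-3
S = S0 * np.exp(-b * D) + rng.normal(0, 10, b.size)  # noisy DW signals
X = np.column_stack([np.ones_like(b), -b])           # design for [ln S0, D]

def wls(weights):
    sw = np.sqrt(weights)
    p, *_ = np.linalg.lstsq(sw[:, None] * X, sw * np.log(S), rcond=None)
    return p[1]                                      # estimated D

D_a = wls(S ** 2)                                    # (a) squared noisy signals
pred = np.exp(X @ np.linalg.lstsq(X, np.log(S), rcond=None)[0])
D_b = wls(pred ** 2)                                 # (b) squared predicted signals
print("D true %.2e, strategy (a) %.2e, strategy (b) %.2e" % (D, D_a, D_b))
```

Strategy (b) is the "multi-step" scheme: an unweighted fit supplies predicted signals, whose squares then serve as weights for the final solve.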
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
NASA Astrophysics Data System (ADS)
Azizi, S.; Torres, L. A. B.; Palhares, R. M.
2018-01-01
The regional robust stabilisation by means of linear time-invariant state feedback control for a class of uncertain MIMO nonlinear systems with parametric uncertainties and control input saturation is investigated. The nonlinear systems are described in a differential algebraic representation and the regional stability is handled considering the largest ellipsoidal domain-of-attraction (DOA) inside a given polytopic region in the state space. A novel set of sufficient Linear Matrix Inequality (LMI) conditions with new auxiliary decision variables are developed aiming to design less conservative linear state feedback controllers with corresponding larger DOAs, by considering the polytopic description of the saturated inputs. A few examples are presented showing favourable comparisons with recently published similar control design methodologies.
NASA Technical Reports Server (NTRS)
Jau, Bruno M.; McKinney, Colin; Smythe, Robert F.; Palmer, Dean L.
2011-01-01
An optical alignment mirror mechanism (AMM) has been developed with angular positioning accuracy of +/-0.2 arcsec. This requires the mirror's linear positioning actuators to have positioning resolutions of +/-112 nm to enable the mirror to meet the angular tip/tilt accuracy requirement. Demonstrated capabilities are 0.1 arcsec angular mirror positioning accuracy, which translates into linear positioning resolutions at the actuator of 50 nm. The mechanism consists of a structure with sets of cross-directional flexures that enable the mirror's tip and tilt motion, a mirror with its kinematic mount, and two linear actuators. An actuator comprises a brushless DC motor, a linear ball screw, and a piezoelectric brake that holds the mirror's position while the unit is unpowered. An interferometric linear position sensor senses the actuator's position. The AMMs were developed for an Astrometric Beam Combiner (ABC) optical bench, which is part of an interferometer development. Custom electronics were also developed to accommodate the presence of multiple AMMs within the ABC and provide a compact, all-in-one solution to power and control the AMMs.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator
NASA Astrophysics Data System (ADS)
Oucheriah, Said
2017-11-01
In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is successfully developed and implemented for the output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped as a signal that is accurately estimated using a low-pass filter and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet certain specifications. Using simulation, the effectiveness of the proposed controller is compared against the sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties, large input and load perturbations.
Reduced conservatism in stability robustness bounds by state transformation
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.; Liang, Z.
1986-01-01
This note addresses the issue of 'conservatism' in the time domain stability robustness bounds obtained by the Liapunov approach. A state transformation is employed to improve the upper bounds on the linear time-varying perturbation of an asymptotically stable linear time-invariant system for robust stability. This improvement is due to the variance of the conservatism of the Liapunov approach with respect to the basis of the vector space in which the Liapunov function is constructed. Improved bounds are obtained, using a transformation, on elemental and vector norms of perturbations (i.e., structured perturbations) as well as on a matrix norm of perturbations (i.e., unstructured perturbations). For the case of a diagonal transformation, an algorithm is proposed to find the 'optimal' transformation. Several examples are presented to illustrate the proposed analysis.
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
The influence of delaying judgments of learning on metacognitive accuracy: a meta-analytic review.
Rhodes, Matthew G; Tauber, Sarah K
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the delayed JOL effect. The meta-analyses reported in the current study examined the predominant theoretical accounts as well as potential moderators of the delayed JOL effect. The first meta-analysis examined the relative accuracy of delayed compared with immediate JOLs across 4,554 participants (112 effect sizes) through gamma correlations between JOLs and memory accuracy. Those data showed that delaying JOLs leads to robust benefits to relative accuracy (g = 0.93). The second meta-analysis examined memory performance for delayed compared with immediate JOLs across 3,807 participants (98 effect sizes). Those data showed that delayed JOLs result in a modest but reliable benefit for memory performance relative to immediate JOLs (g = 0.08). Findings from these meta-analyses are well accommodated by theories suggesting that delayed JOL accuracy reflects access to more diagnostic information from long-term memory rather than being a by-product of a retrieval opportunity. However, these data also suggest that theories proposing that the delayed JOL effect results from a memorial benefit or the match between the cues available for JOLs and those available at test may also provide viable explanatory mechanisms necessary for a comprehensive account.
Kalmár, Eva; Gyuricza, Anett; Kunos-Tóth, Erika; Szakonyi, Gerda; Dombi, György
2014-01-01
Combined drug products have the advantages of better patient compliance and possible synergic effects. The simultaneous application of several active ingredients at a time is therefore frequently chosen. However, the quantitative analysis of such medicines can be challenging. The aim of this study is to provide a validated method for the investigation of a multidose packed oral powder that contained acetylsalicylic acid, paracetamol and papaverine-HCl. Reversed-phase high-pressure liquid chromatography was used. The Agilent Zorbax SB-C18 column was found to be the most suitable of the three different stationary phases tested for the separation of the components of this sample. The key parameters in the method development (apart from the nature of the column) were the pH of the aqueous phase (set to 3.4) and the ratio of the organic (acetonitrile) and the aqueous (25 mM phosphate buffer) phases, which was varied from 7:93 (v/v) to 25:75 (v/v) in a linear gradient, preceded by an initial hold. The method was validated: linearity, precision (repeatability and intermediate precision), accuracy, specificity and robustness were all tested, and the results met the ICH guidelines. © The Author [2013]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Forest Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy performance of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R
2015-11-21
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially the presence of outliers and thick-tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
NASA Astrophysics Data System (ADS)
Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative
2015-11-01
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
A novel strategy for forensic age prediction by DNA methylation and support vector regression model
Xu, Cheng; Qu, Hongzhu; Wang, Guangyu; Xie, Bingbing; Shi, Yi; Yang, Yaran; Zhao, Zhao; Hu, Lan; Fang, Xiangdong; Yan, Jiangwei; Feng, Lei
2015-01-01
High deviations resulting from the prediction model, gender, and population differences have limited the application of DNA methylation markers to age estimation. Here we identified 2,957 novel age-associated DNA methylation sites (P < 0.01 and R2 > 0.5) in blood of eight pairs of Chinese Han female monozygotic twins. Among them, nine novel sites (false discovery rate < 0.01), along with three other reported sites, were further validated in 49 unrelated female volunteers aged 20–80 years by Sequenom MassArray. A total of 95 CpGs were covered in the PCR products and 11 of them were used to build the age prediction models. After comparing four different models, including multivariate linear regression, multivariate nonlinear regression, back propagation neural network and support vector regression (SVR), SVR was identified as the most robust model with the least mean absolute deviation from real chronological age (2.8 years) and an average accuracy of 4.7 years predicted by only six of the 11 loci, as well as a smaller cross-validation error compared with the linear regression model. Our novel strategy provides an accurate measurement that is highly useful for estimating individual age in forensic practice as well as for tracking the aging process in other related applications. PMID:26635134
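The model-comparison step can be sketched with scikit-learn: SVR versus linear regression on a handful of methylation predictors, scored by cross-validated mean absolute deviation (MAD) from chronological age. The methylation data are simulated; only the cohort size and locus count echo the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 49)                        # 49 volunteers
meth = np.clip(0.2 + 0.01 * age[:, None]             # age-linked beta values
               + 0.03 * rng.standard_normal((49, 6)), 0, 1)

for name, model in [("SVR", SVR(kernel="rbf", C=10.0)),
                    ("linear", LinearRegression())]:
    pred = cross_val_predict(model, meth, age, cv=5)  # held-out predictions
    print(name, "MAD: %.1f years" % np.abs(pred - age).mean())
```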
Multi-target detection and positioning in crowds using multiple camera surveillance
NASA Astrophysics Data System (ADS)
Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng
2018-04-01
In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
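Once a combined cost is built from the three stated constraints, a one-to-one correspondence over a (near-)permutation matrix is an assignment problem, which the Hungarian algorithm solves exactly as a relaxation-free special case of the linear program; the cost terms below are random placeholders for the real distance, grayscale, and height terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 6                                 # candidate pixels in each view
cost = (rng.uniform(0, 1, (n, n))     # line-of-sight distance term
        + rng.uniform(0, 1, (n, n))   # grayscale difference term
        + rng.uniform(0, 1, (n, n)))  # height consistency term

rows, cols = linear_sum_assignment(cost)   # cost-minimizing permutation
print("matched pairs:", list(zip(rows, cols)))
print("total cost:", cost[rows, cols].sum())
```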
Shaikh, K A; Patil, S D; Devkhile, A B
2008-12-15
A simple, precise and accurate reversed-phase liquid chromatographic method has been developed for the simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet formulations. The chromatographic separation was achieved on an Xterra RP18 (250 mm x 4.6 mm, 5 microm) analytical column. A mixture of acetonitrile-dipotassium phosphate (30 mM) (50:50, v/v) (pH 9.0) was used as the mobile phase, at a flow rate of 1.7 ml/min with detection at 215 nm. The retention times of ambroxol and azithromycin were found to be 5.0 and 11.5 min, respectively. The proposed method was validated for specificity, linearity, accuracy, precision, limit of detection, limit of quantitation and robustness. The linear dynamic ranges were 30-180 and 250-1500 microg/ml for ambroxol hydrochloride and azithromycin, respectively. The percentage recoveries obtained for ambroxol hydrochloride and azithromycin were 99.40 and 99.90%, respectively. The limits of detection and quantification were 0.8 and 2.3 microg/ml for azithromycin and 0.004 and 0.01 microg/ml for ambroxol hydrochloride, respectively. The developed method can be used for routine quality control analysis of the titled drugs in combination in tablet formulation.
Dearing, Chey G; Kilburn, Sally; Lindsay, Kevin S
2014-03-01
Sperm counts have been linked to several fertility outcomes, making them an essential parameter of semen analysis. It has become increasingly recognised that Computer-Assisted Semen Analysis (CASA) provides improved precision over manual methods, but systems are seldom validated robustly for use. The objective of this study was to gather the evidence to validate or reject the Sperm Class Analyser (SCA) as a tool for routine sperm counting in a busy laboratory setting. The criteria examined were comparison with the Improved Neubauer and Leja 20-μm chambers, within- and between-field precision, sperm concentration linearity from a stock diluted in semen and media, accuracy against internal and external quality material, assessment of uneven flow effects and a receiver operating characteristic (ROC) analysis to predict fertility in comparison with the Neubauer method. This work demonstrates that SCA CASA technology is not a standalone 'black box', but rather a tool for well-trained staff that allows rapid, high-number sperm counting provided errors are identified and corrected. The system produces accurate, linear, precise results, with less analytical variance than manual methods, that correlate well with the Improved Neubauer chamber. The system provides superior predictive potential for diagnosing fertility problems.
2016-01-01
Rifaximin is an oral nonabsorbable antibiotic that acts locally in the gastrointestinal tract with minimal systemic adverse effects. No ecofriendly ultraviolet spectrophotometric method for it is described in official compendia or the literature. The analytical techniques for determination of rifaximin reported in the literature are time-consuming and costly, and they use reagents toxic both to the operator and the environment; they therefore cannot be considered environmentally friendly analytical techniques. The objective of this study was to develop and validate an ecofriendly spectrophotometric method in the ultraviolet region to quantify rifaximin in tablets. The method was validated, showing linearity, selectivity, precision, accuracy, and robustness. It was linear over the concentration range of 10–30 mg L−1 with correlation coefficients greater than 0.9999 and limits of detection and quantification of 1.39 and 4.22 mg L−1, respectively. The validated method is useful for the routine quality control of rifaximin, since it is simple and inexpensive, fast in the release of results, makes efficient use of analysts and equipment, and uses environmentally friendly solvents; it can thus be considered a green method that harms neither the operator nor the environment. PMID:27429835
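The reported limits (LOD 1.39 vs. LOQ 4.22 mg L−1) are consistent with the common ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope. A sketch with made-up calibration readings:

```python
import numpy as np

conc = np.array([10, 15, 20, 25, 30], dtype=float)     # mg L-1 (validated range)
absorbance = np.array([0.21, 0.31, 0.42, 0.52, 0.63])  # hypothetical readings

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # residual standard deviation (n - 2 dof)

lod = 3.3 * sigma / slope            # limit of detection
loq = 10.0 * sigma / slope           # limit of quantification
print(f"LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")
```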
Ulu, Sevgi Tatar; Tuncel, Muzaffer
2012-04-01
A novel precolumn derivatization reversed-phase high-performance liquid chromatography method with fluorescence detection is described for the determination of ranitidine in human plasma. The method was based on the reaction of ranitidine with 4-fluoro-7-nitrobenzo-2-oxa-1,3-diazole, forming a yellow fluorescent product. The separation was achieved on a C(18) column using a methanol-water (60:40, v/v) mobile phase. Fluorescence detection was performed at excitation and emission wavelengths of 458 and 521 nm, respectively. Lisinopril was used as an internal standard. The flow rate was 1.2 mL/min. Ranitidine and lisinopril eluted at 3.24 and 2.25 min, respectively. The method was validated for system suitability, precision, accuracy, linearity, limit of detection, limit of quantification, recovery and robustness. Intra- and inter-day precisions of the assays were in the range of 0.01-0.44%. The assay was linear over the concentration range of 50-2000 ng/mL. The mean recovery was determined to be 96.40 ± 0.02%. This method was successfully applied to a pharmacokinetic study after oral administration of a single dose (150 mg) of ranitidine.
NASA Astrophysics Data System (ADS)
Eyarkai Nambi, Vijayaram; Thangavel, Kuladaisamy; Manickavasagan, Annamalai; Shahir, Sultan
2017-01-01
Prediction of the ripeness level of climacteric fruits is essential for post-harvest handling. An index capable of predicting ripening level with minimal inputs would be highly beneficial to handlers, processors and researchers in the fruit industry. A study was conducted with Indian mango cultivars to develop a ripeness index and an associated model. Changes in physicochemical, colour and textural properties were measured throughout the ripening period, and the period was classified into five stages (unripe, early ripe, partially ripe, ripe and over ripe). Multivariate regression techniques, namely partial least squares regression, principal component regression and multiple linear regression, were compared and evaluated for prediction. A multiple linear regression model with 12 parameters was found most suitable for ripening prediction. A variable reduction method was adopted to simplify the developed model, and good prediction was achieved with either two or three variables (total soluble solids, colour and acidity). Cross-validation was performed to increase robustness, and the proposed ripening index was found effective in predicting ripening stages. The three-variable model would be suitable for commercial applications where reasonable accuracies are sufficient, while the 12-variable model can be used to obtain more precise results in research and development applications.
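The reduced three-variable model can be sketched as an ordinary multiple linear regression with cross-validation; the data below are synthetic (a ripeness score driven mostly by total soluble solids), so the printed R² only illustrates the workflow, not the study's results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
tss = rng.uniform(5, 25, 50)           # total soluble solids (deg Brix)
colour = rng.uniform(0, 10, 50)        # colour index
acidity = rng.uniform(0.1, 2.0, 50)    # titratable acidity (%)
X = np.column_stack([tss, colour, acidity])

# Synthetic ripeness score as a stand-in target
ripeness = 0.2 * tss + 0.1 * colour - 1.0 * acidity + rng.normal(0, 0.3, 50)

scores = cross_val_score(LinearRegression(), X, ripeness, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```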
Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping
NASA Astrophysics Data System (ADS)
Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta
2012-10-01
A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed into various applications on water, rail, or road, the need emerges for an external sensor calibration procedure that is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on application requirements without time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, benchmarking tests under various lighting conditions prove the methodology's robustness, showing high absolute stereo measurement accuracies of a few centimeters.
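Mutual information, the similarity measure used here for aligning the ranging pole, can be computed from a joint intensity histogram; the sketch below is a generic textbook version, not the authors' code.

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information between two equally sized grayscale images."""
    hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    p = hist / hist.sum()                  # joint probability
    px, py = p.sum(axis=1), p.sum(axis=0)  # marginals
    nz = p > 0                             # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(2)
a = rng.random((64, 64))
# Self-MI is high; MI with an independent image is near zero
print(mutual_information(a, a), mutual_information(a, rng.random((64, 64))))
```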
Handheld pose tracking using vision-inertial sensors with occlusion handling
NASA Astrophysics Data System (ADS)
Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried
2016-07-01
Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust to illumination changes. Three data fusion methods are proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy of the proposed system is assessed by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
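The triangulation underlying the stereo-vision fusion can be illustrated with the standard linear (DLT) method; the projection matrices and pixel coordinates below are placeholders, and the paper's full pipeline additionally fuses inertial measurements.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # least-squares solution of A X = 0
    X = vt[-1]
    return X[:3] / X[3]             # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])          # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0, 0, 0]]).T])  # 1 unit baseline
point = np.array([0.2, 0.1, 5.0, 1.0])
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.2, 0.1, 5.0]
```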
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions, and that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We also find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly regular periods likely have solely negative feedback. PMID:24667178
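As one concrete, deliberately generic member of this model class (our illustrative form, not the specific family analyzed in the paper), a single-gene oscillator with delayed negative feedback and saturable degradation reads:

```latex
\frac{dx(t)}{dt} \;=\; \frac{\beta}{1 + \bigl(x(t-\tau)/K\bigr)^{n}} \;-\; \frac{\gamma\, x(t)}{1 + x(t)/K_{d}}
```

Here τ is the feedback delay and n the Hill coefficient; letting K_d → ∞ makes the degradation term linear (non-saturable), the regime the authors find damps oscillations.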
Accuracy and reliability of stitched cone-beam computed tomography images.
Egbert, Nicholas; Cagna, David R; Ahuja, Swati; Wicks, Russell A
2015-03-01
This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm, with a 95% confidence interval of 0.24-0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.
Exploiting structure: Introduction and motivation
NASA Technical Reports Server (NTRS)
Xu, Zhong Ling
1993-01-01
Research activities performed during the period 29 June 1993 through 31 August 1993 are summarized. Work addressed the robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest, in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach, 'stability by linear process', was found that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty and yields an algorithm. In analysis, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, and also a result for the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design and provide a framework for solving ACS. Copies of the outline of our results are provided in the appendix, along with administrative information.
Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V
2000-08-01
We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.
NASA Astrophysics Data System (ADS)
Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.
2015-12-01
Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
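The early-detection idea can be reduced to its simplest form, a local-level dynamic linear model whose Kalman filter yields one-step-ahead forecasts and a prediction envelope; the full model described above additionally includes trend, Fourier seasonality and time-varying coefficients. A minimal sketch with toy weekly counts and illustrative noise variances:

```python
import numpy as np

def local_level_filter(y, q=1.0, r=10.0):
    """Kalman filter for a local-level DLM; q, r are state/observation
    noise variances (illustrative). Returns one-step-ahead forecasts and
    their variances, so outbreaks can be flagged outside the envelope."""
    m, c = y[0], 1.0                 # initial state mean and variance
    forecasts, variances = [], []
    for obs in y[1:]:
        a, rr = m, c + q             # predict next state
        f, qv = a, rr + r            # one-step-ahead forecast and variance
        forecasts.append(f)
        variances.append(qv)
        k = rr / qv                  # Kalman gain
        m, c = a + k * (obs - f), (1 - k) * rr   # update with new observation
    return np.array(forecasts), np.array(variances)

y = np.array([12, 15, 14, 18, 60, 22], dtype=float)   # toy weekly case counts
f, v = local_level_filter(y)
print(y[1:] > f + 1.96 * np.sqrt(v))   # True flags a potential outbreak
```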
Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.
Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine
2018-04-05
Thanks to its reasonable cost and simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new algorithm for extracting peaks from raw spectra in which the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model constituted by a smooth baseline part plus a sparse peak list convolved with a known peak shape, fitted under a Gaussian noise model. The proposed method is well suited to processing low-resolution spectra with significant baseline and unresolved peaks. The paper describes the derivation of this new peak deconvolution procedure and discusses some of its interpretations, then presents the algorithm in pseudo-code form with the required optimization procedure detailed. For synthetic data the method is compared to a more conventional approach: the new joint method reduces artifacts caused by the usual two-step procedure of baseline removal followed by peak extraction. Finally, results on real linear MALDI-ToF spectra are provided. In a dedicated experiment, a collection of spectra of spiked proteins was acquired and analyzed; better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis.
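In our notation (not the paper's), the additive model described can be written as below, with b(t) the smooth baseline, a_k the sparse peak amplitudes at locations t_k, and h the known peak shape; the joint fit estimates b and the peaks simultaneously rather than removing the baseline first.

```latex
y(t) \;=\; b(t) \;+\; \sum_{k} a_k\, h(t - t_k) \;+\; \varepsilon(t),
\qquad \varepsilon(t) \sim \mathcal{N}(0, \sigma^{2})
```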
NASA Technical Reports Server (NTRS)
Melott, A. L.; Buchert, T.; Weiß, A. G.
1995-01-01
We present results showing an improvement in the accuracy of perturbation theory as applied to cosmological structure formation over a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. In a first step we analyzed the performance of the Lagrangian schemes for pancake models, in which the initial power spectrum is truncated; that work probed the quasi-linear and weakly non-linear regimes. As a second step, we here study the dynamics of hierarchical models and explore whether the results found for pancake models carry over when the models are evolved deeply into the non-linear regime. We smooth the initial data using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as was done for the truncated 'Zel'dovich approximation' (hereafter TZA) in previous work. We find that for spectra with negative power index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes become worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for modelling standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors, in which case robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This scheme avoids uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD scheme on the unit sphere is designed using an LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method compensates for uncertainties in the estimated parameters better than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
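Schematically, the depth-free scheme is a standard PD law on the spherical gaze error, with gains chosen by LMI feasibility rather than hand tuning; the notation below is ours, not the paper's:

```latex
\mathbf{u}(t) \;=\; -\,K_p\,\mathbf{e}(t) \;-\; K_d\,\dot{\mathbf{e}}(t)
```

where e(t) is the gaze error on the unit sphere and the gain pair (K_p, K_d) is selected so that stability holds under bounded uncertainty in the estimated image position and velocity.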
Xiao, Chuan-Le; Mai, Zhi-Biao; Lian, Xin-Lei; Zhong, Jia-Yong; Jin, Jing-Jie; He, Qing-Yu; Zhang, Gong
2014-01-01
Correct and bias-free interpretation of deep sequencing data inevitably depends on the complete mapping of all mappable reads to the reference sequence, especially for quantitative RNA-seq applications. Seed-based algorithms are generally slow but robust, while Burrows-Wheeler Transform (BWT) based algorithms are fast but less robust. To have both advantages, we developed the algorithm FANSe2 with an iterative mapping strategy based on the statistics of real-world sequencing error distribution to substantially accelerate the mapping without compromising accuracy. Its sensitivity and accuracy are higher than those of the BWT-based algorithms in tests using both prokaryotic and eukaryotic sequencing datasets. The gene identification results of FANSe2 were experimentally validated, whereas the previous algorithms produced false positives and false negatives. FANSe2 showed remarkably better consistency with the microarray than most other algorithms in terms of gene expression quantification. We implemented a scalable and almost maintenance-free parallelization method that can utilize the computational power of multiple office computers, a novel feature not present in any other mainstream algorithm. With three normal office computers, we demonstrated that FANSe2 mapped an RNA-seq dataset generated from an entire Illumina HiSeq 2000 flowcell (8 lanes, 608 M reads) to the masked human genome within 4.1 hours with higher sensitivity than Bowtie/Bowtie2. FANSe2 thus provides robust accuracy, full indel sensitivity, fast speed, versatile compatibility and economical computational utilization, making it a useful and practical tool for deep sequencing applications. FANSe2 is freely available at http://bioinformatics.jnu.edu.cn/software/fanse2/.
Impact of Compounding Error on Strategies for Subtyping Pathogenic Bacteria
Orfe, Lisa; Davis, Margaret A.; Lafrentz, Stacey; Kang, Min-Su
2008-01-01
Comparative omics will identify a multitude of markers that can be used for intraspecific discrimination between strains of bacteria. It seems intuitive that with this plethora of markers we can construct higher-resolution subtyping assays using discrete markers to define strain “barcodes.” Unfortunately, with each new marker added to an assay, overall assay robustness declines because errors are compounded exponentially. For example, the accuracy of strain classification for an assay with 60 markers falls from 99.9% to 54.7% when average probe accuracy declines from 99.999% to 99.0%. To illustrate this effect empirically, we constructed a 19-probe bead-array for subtyping Listeria monocytogenes and showed that despite seemingly reliable individual probe accuracy (>97%), our best classification results at the strain level were <75%. A more robust strategy would use as few markers as possible to achieve strain discrimination. Consequently, we developed two variable number of tandem repeat (VNTR) assays (Vibrio parahaemolyticus and L. monocytogenes) and demonstrate that these assays, along with a published assay (Salmonella enterica), produce robust results when products are machine scored. The discriminatory ability with four to seven VNTR loci was comparable to pulsed-field gel electrophoresis. Passage experiments showed some instability, with ca. 5% of passaged lines showing evidence for new alleles within 30 days (V. parahaemolyticus and S. enterica). Changes were limited to a single locus and allele, so conservative rules can be used to determine strain matching. Most importantly, VNTRs appear robust and portable and can clearly discriminate between strains with relatively few loci, thereby limiting the effects of compounding error. PMID:18713065
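The compounding arithmetic is simply the per-probe accuracy raised to the number of independent markers; the two figures quoted above follow directly:

```python
# Overall assay accuracy when all n markers must be scored correctly and
# probe errors are independent: overall = per_probe_accuracy ** n.
for p in (0.99999, 0.99):
    print(f"per-probe accuracy {p}: 60-marker assay accuracy = {p ** 60:.3f}")
# -> 0.999 and 0.547, matching the figures quoted above
```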
Real-time In vivo Diagnosis of Nasopharyngeal Carcinoma Using Rapid Fiber-Optic Raman Spectroscopy.
Lin, Kan; Zheng, Wei; Lim, Chwee Ming; Huang, Zhiwei
2017-01-01
We report the utility of simultaneous fingerprint (FP) (i.e., 800-1800 cm-1) and high-wavenumber (HW) (i.e., 2800-3600 cm-1) fiber-optic Raman spectroscopy developed for real-time in vivo diagnosis of nasopharyngeal carcinoma (NPC) at endoscopy. A total of 3731 high-quality in vivo FP/HW Raman spectra (normal=1765; cancer=1966) were acquired in real-time from 204 tissue sites (normal=95; cancer=109) of 95 subjects (normal=57; cancer=38) undergoing endoscopic examination. FP/HW Raman spectra differ significantly between normal and cancerous nasopharyngeal tissues, which could be attributed to changes in proteins, lipids, nucleic acids, and bound water content in NPC. Principal components analysis (PCA) and linear discriminant analysis (LDA) together with leave-one-subject-out cross-validation (LOO-CV) were implemented to develop robust Raman diagnostic models. The simultaneous FP/HW Raman spectroscopy technique together with PCA-LDA and LOO-CV modeling provides a diagnostic accuracy of 93.1% (sensitivity of 93.6%; specificity of 92.6%) for nasopharyngeal cancer identification, which is superior to using either the FP (accuracy of 89.2%; sensitivity of 89.9%; specificity of 88.4%) or HW (accuracy of 89.7%; sensitivity of 89.0%; specificity of 90.5%) Raman technique alone. Further receiver operating characteristic (ROC) analysis reconfirms the superior performance of the simultaneous FP/HW Raman technique for in vivo diagnosis of NPC. This work demonstrates for the first time that the simultaneous FP/HW fiber-optic Raman spectroscopy technique has great promise for enhancing real-time in vivo cancer diagnosis in the nasopharynx during endoscopic examination.
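The diagnostic modeling stage maps onto a standard PCA-LDA pipeline with leave-one-subject-out cross-validation; the sketch below uses random spectra (so the printed accuracy is chance level) and illustrative dimensions only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 500))          # 200 spectra x 500 wavenumbers
y = rng.integers(0, 2, size=200)         # 0 = normal, 1 = cancer
groups = rng.integers(0, 20, size=200)   # subject IDs for LOO-CV by subject

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = cross_val_predict(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print("accuracy: %.3f" % (pred == y).mean())
```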
Modern CACSD using the Robust-Control Toolbox
NASA Technical Reports Server (NTRS)
Chiang, Richard Y.; Safonov, Michael G.
1989-01-01
The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools like singular values and structured singular values, robust synthesis tools like continuous/discrete H2/H-infinity synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H-infinity loop-shaping and large space structure model reduction.
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic, but it may not be very accurate. Simulations were performed with simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs than with time, the independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
The effectiveness of robust RMCD control chart as outliers’ detector
NASA Astrophysics Data System (ADS)
Darmanto; Astutik, Suci
2017-12-01
A well-known control chart for monitoring a multivariate process is Hotelling's T2, whose parameters are classically estimated; it is very sensitive to, and marred by, the masking and swamping effects of outlier data. To overcome this situation, robust estimators are strongly recommended. One such robust estimator is the re-weighted minimum covariance determinant (RMCD), which has the same robustness characteristics as the MCD. In this paper, effectiveness means the accuracy of the RMCD control chart in detecting outliers as real outliers, in other words, how effectively the control chart can identify and remove the masking and swamping effects of outliers. We assessed the effectiveness of the robust control chart by simulation, considering different scenarios: sample size n, proportion of outliers, and number of quality characteristics p. We found that in some scenarios the RMCD robust control chart works effectively.
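scikit-learn's MinCovDet implements a re-weighted MCD estimator, so a robust Hotelling-type chart can be sketched as below; the chi-square control limit is a common approximation, not necessarily the limit used in the paper.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

# Simulated in-control process (p = 3 quality characteristics) plus outliers
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=100)
X[:5] += 6.0                               # five shifted observations

mcd = MinCovDet(random_state=0).fit(X)     # re-weighted MCD location/scatter
t2 = mcd.mahalanobis(X)                    # robust squared distances (T2-like)
ucl = chi2.ppf(0.9973, df=3)               # approximate 3-sigma control limit
print("flagged observations:", np.where(t2 > ucl)[0])
```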
Envelope responses in single-trial EEG indicate attended speaker in a 'cocktail party'.
Horton, Cort; Srinivasan, Ramesh; D'Zmura, Michael
2014-08-01
Recent studies have shown that auditory cortex encodes the envelope of attended speech better than that of unattended speech during multi-speaker ('cocktail party') situations. We investigated whether these differences are sufficiently robust within single-trial electroencephalographic (EEG) data to accurately determine where subjects attended. Additionally, we compared this measure to other established EEG markers of attention. High-resolution EEG was recorded while subjects engaged in a two-speaker 'cocktail party' task. Cortical responses to speech envelopes were extracted by cross-correlating the envelopes with each EEG channel. We also measured steady-state responses (elicited via high-frequency amplitude modulation of the speech) and alpha-band power, both of which have been shown to be sensitive to attention in previous studies. Using linear classifiers, we then examined how well each of these features could predict the subjects' side of attention at various epoch lengths. We found that the attended speaker could be determined reliably from the envelope responses calculated from short periods of EEG, with accuracy improving as a function of sample length. Furthermore, envelope responses were far better indicators of attention than changes in either alpha power or steady-state responses. These results suggest that envelope-related signals recorded in EEG data can be used to build robust auditory BCIs that do not require artificial manipulation (e.g., amplitude modulation) of stimuli to function.
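The envelope-response feature reduces to lagged correlations between the speech envelope and each EEG channel; a minimal sketch with our variable names and toy data:

```python
import numpy as np

def envelope_response(eeg, envelope, max_lag=64):
    """Correlation between a speech envelope and each EEG channel at lags
    -max_lag..max_lag; the resulting (n_channels, 2*max_lag+1) array is
    the feature from which the attended side is then classified."""
    env = (envelope - envelope.mean()) / envelope.std()
    feats = []
    for ch in eeg:
        c = (ch - ch.mean()) / ch.std()
        row = [np.mean(c[max(0, -k):len(c) - max(0, k)] *
                       env[max(0, k):len(env) - max(0, -k)])
               for k in range(-max_lag, max_lag + 1)]
        feats.append(row)
    return np.asarray(feats)

rng = np.random.default_rng(3)
print(envelope_response(rng.normal(size=(8, 1000)), rng.normal(size=1000)).shape)
```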
Analytical Parameters of an Amperometric Glucose Biosensor for Fast Analysis in Food Samples
2017-01-01
Amperometric biosensors based on the use of glucose oxidase (GOx) are able to combine the robustness of electrochemical techniques with the specificity of biological recognition processes. However, very little information can be found in the literature about the fundamental analytical parameters of these sensors. In this work, the analytical behavior of an amperometric biosensor based on the immobilization of GOx in a hydrogel (chitosan) onto highly ordered titanium dioxide nanotube arrays (TiO2NTAs) has been evaluated. The GOx–Chitosan/TiO2NTAs biosensor showed a sensitivity of 5.46 μA·mM−1 with a linear range from 0.3 to 1.5 mM; its fundamental analytical parameters were studied using a commercial soft drink. The results demonstrated sufficient repeatability (RSD = 1.9%), reproducibility (RSD = 2.5%), accuracy (95–105% recovery), and robustness (RSD = 3.3%). Furthermore, no significant interferences from fructose, ascorbic acid or citric acid were observed. Storage stability was also examined: after 30 days, the GOx–Chitosan/TiO2NTAs biosensor retained 85% of its initial current response. Finally, the glucose content of different food samples was measured using the biosensor and compared with the respective HPLC value; in the worst case, a deviation smaller than 10% was obtained among the 20 samples evaluated. PMID:29135931
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular measure of functional connectivity, but in practical use it assumes the data to be temporally independent, whereas brain time series can manifest significant temporal auto-correlation. We propose a widely applicable method for correcting the effect of temporal auto-correlation on functional connectivity. We considered two types of time series models: (1) the auto-regressive moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies, and we show that their asymptotic distributions share a unified expression. In numerical experiments, our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations, where existing methods measuring association (linear and nonlinear) fail. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Empirical results favor the use of our method in functional network analysis.
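A classical stand-in for such corrections scales the sample size by the two series' autocorrelations (a Bartlett-type effective sample size); the sketch below illustrates that standard correction, not the asymptotic distributions derived in the paper.

```python
import numpy as np
from scipy import stats

def autocorr(x, max_lag):
    x = x - x.mean()
    return np.array([np.corrcoef(x[:-k], x[k:])[0, 1]
                     for k in range(1, max_lag + 1)])

def corrected_corr_test(x, y, max_lag=20):
    """Correlation test using a Bartlett-type effective sample size."""
    n = len(x)
    rx, ry = autocorr(x, max_lag), autocorr(y, max_lag)
    n_eff = n / (1 + 2 * np.sum(rx * ry))      # effective sample size
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)   # two-sided p-value
    return r, p

rng = np.random.default_rng(4)
x, y = rng.normal(size=500), rng.normal(size=500)
print(corrected_corr_test(x, y))
```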
Stability-Indicating HPLC Determination of Gemcitabine in Pharmaceutical Formulations
Singh, Rahul; Shakya, Ashok K.; Naik, Rajashri; Shalan, Naeem
2015-01-01
A simple, sensitive, inexpensive, and rapid stability-indicating high performance liquid chromatographic method has been developed for the determination of gemcitabine in injectable dosage forms using theophylline as internal standard. Chromatographic separation was achieved on a Phenomenex Luna C-18 column (250 mm × 4.6 mm; 5 μm) with a mobile phase consisting of 90% water and 10% acetonitrile (pH 7.00 ± 0.05). The signals of gemcitabine and theophylline were recorded at 275 nm. Calibration curves were linear in the concentration range of 0.5–50 μg/mL, with a correlation coefficient of 0.999 or higher. The limit of detection and limit of quantitation were 0.1498 and 0.4541 μg/mL, respectively. The inter- and intraday precision were less than 2%. Accuracy of the method ranged from 100.2% to 100.4%. Stability studies indicate that the drug was stable to sunlight and UV light. The drug gives six different hydrolytic products under alkaline stress and three under acidic conditions; aqueous and oxidative stress conditions also degrade the drug, with degradation highest under the alkaline condition. The robustness of the method was evaluated using design of experiments. Validation reveals that the proposed method is specific, accurate, precise, reliable, robust, reproducible, and suitable for quantitative analysis. PMID:25838825
Prieto, Ana I; Guzmán-Guillén, Remedios; Díez-Quijada, Leticia; Campos, Alexandre; Vasconcelos, Vitor; Jos, Ángeles; Cameán, Ana M
2018-02-01
Reports of the occurrence of the cyanobacterial toxin cylindrospermopsin (CYN) have increased worldwide because of CYN's toxic effects in humans and animals. If contaminated waters are used for plant irrigation, these could represent a possible route of CYN exposure for humans. For the first time, a method employing solid phase extraction and quantification by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) of CYN was optimized in vegetable matrices such as lettuce (Lactuca sativa). The validated method showed a linear range from 5 to 500 ng CYN g-1 of fresh weight (f.w.), and detection and quantitation limits (LOD and LOQ) of 0.22 and 0.42 ng CYN g-1 f.w., respectively. The mean recoveries ranged between 85 and 104%, and the intermediate precision from 12.7 to 14.7%. The method proved robust for the three variables tested. Moreover, it was successfully applied to quantify CYN in edible lettuce leaves exposed to CYN-contaminated water (10 µg L-1), showing that the tolerable daily intake (TDI) of CYN could be exceeded by elderly high consumers. The validated method showed good results in terms of sensitivity, precision, accuracy, and robustness for CYN determination in leaf vegetables such as lettuce. More studies are needed in order to prevent the risks associated with the consumption of CYN-contaminated vegetables.
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important uses in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing the five wrist activities during walking. Then, a robust step detection algorithm, which integrates an adaptive threshold with a peak and valley correction algorithm, is applied to the classified activities to detect walking steps; misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, the accuracy of the step detection is 98.7%, and the error of the estimated distance is 2.2–4.2% depending on the type of wrist activity.
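A minimal sketch of the step-detection core, peak picking with a signal-adaptive threshold, is shown below; it is a simplified stand-in for the adaptive threshold plus peak/valley correction described above, with illustrative constants.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(accel_mag, fs=50.0):
    """Peak-based step detector on the wrist accelerometer magnitude."""
    sig = accel_mag - np.mean(accel_mag)
    thresh = 0.5 * np.std(sig)        # threshold adapts to signal energy
    min_gap = int(0.3 * fs)           # refractory period: no steps < 0.3 s apart
    peaks, _ = find_peaks(sig, height=thresh, distance=min_gap)
    return len(peaks)

fs = 50.0
t = np.arange(0, 10, 1 / fs)          # 10 s of synthetic walking at 2 Hz
walk = np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.default_rng(5).normal(size=t.size)
print(count_steps(walk, fs))          # ~20 steps
```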
NASA Astrophysics Data System (ADS)
Liu, Wanjun; Liang, Xuejian; Qu, Haicheng
2017-11-01
Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community, and both traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain spectral-spatial joint information. In the feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D vectors. DVCNN can therefore not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the computation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results show that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement of DVCNN over other state-of-the-art HSI classification methods was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
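A minimal PyTorch sketch of the dimensionality-varied idea, a 3D convolution over a spectral-spatial patch whose output is reshaped into 2D feature maps for ordinary 2D convolution; the shapes and channel counts are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

patch = torch.randn(1, 1, 200, 7, 7)     # (batch, chan, bands, height, width)

conv3d = nn.Conv3d(1, 16, kernel_size=(7, 3, 3))   # spectral-spatial kernels
feat = conv3d(patch)                      # -> (1, 16, 194, 5, 5)

flat2d = feat.view(1, 16 * 194, 5, 5)     # stack 1D spectral outputs as 2D maps
conv2d = nn.Conv2d(16 * 194, 64, kernel_size=3)
out = conv2d(flat2d)                      # -> (1, 64, 3, 3)
print(out.shape)
```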