Sample records for optimal estimation methods

  1. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
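
    As a rough illustration of this recipe, the sketch below scores candidate time-bandwidth products by an estimated MSE built from a spline-curvature bias proxy and a delete-one jackknife variance over the log eigenspectra. The K = 2NW - 1 taper count and the (W^2/6)-curvature bias constant are illustrative assumptions, not the paper's exact formulas.

    ```python
    # Minimal sketch of MSE-based bandwidth selection for multitaper
    # spectrum estimation; constants are assumptions, not the paper's.
    import numpy as np
    from scipy.signal.windows import dpss
    from scipy.interpolate import CubicSpline

    def log_spectrum_and_jackknife(x, nw):
        """Average log eigenspectrum and its delete-one jackknife variance."""
        n, k = len(x), max(2, int(2 * nw) - 1)
        tapers = dpss(n, nw, k)                           # (k, n) Slepian tapers
        eig = np.abs(np.fft.rfft(tapers * x, axis=1))**2  # k eigenspectra
        log_eig = np.log(eig + 1e-300)
        jk = np.array([np.delete(log_eig, i, 0).mean(0) for i in range(k)])
        var = (k - 1) / k * ((jk - jk.mean(0))**2).sum(0)
        return log_eig.mean(0), var

    def estimated_mse(x, nw):
        log_s, var = log_spectrum_and_jackknife(x, nw)
        f = np.fft.rfftfreq(len(x))
        curv = CubicSpline(f, log_s)(f, 2)       # spline curvature estimate
        bias = (nw / len(x))**2 / 6 * curv       # quadratic local bias proxy
        return float(np.mean(bias**2 + var))

    x = np.random.default_rng(0).standard_normal(1024)
    best = min([2.5, 3.5, 4.5, 6.0, 8.0], key=lambda nw: estimated_mse(x, nw))
    print("chosen time-bandwidth product NW =", best)
    ```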

  2. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of a drive system with an elastic joint. The signals estimated by the NNs are used in a control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of a closed-loop system. The precision of state variable estimation depends on the generalization properties of the NNs. A short review of optimization methods for NNs is presented. Two techniques typical of regularization and pruning methods are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also for changed parameters of the drive system. The simulation results are verified in a laboratory setup.
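
    For the Optimal Brain Damage side of that comparison, a minimal sketch of the pruning rule is shown below for a single linear layer trained by least squares, where the diagonal Hessian of the loss is available in closed form. The layer and data are illustrative, not the paper's drive-system estimator.

    ```python
    # OBD saliency s_i = H_ii * w_i^2 / 2; prune the least salient weights.
    import numpy as np

    rng = np.random.default_rng(9)
    X = rng.standard_normal((200, 8))
    w_true = np.array([2.0, -1.5, 0.0, 0.0, 0.8, 0.0, 0.0, 0.3])
    y = X @ w_true + 0.05 * rng.standard_normal(200)

    w = np.linalg.lstsq(X, y, rcond=None)[0]   # trained weights
    h_diag = np.sum(X**2, axis=0)              # diag Hessian of 0.5*||Xw - y||^2
    saliency = h_diag * w**2 / 2               # OBD saliency per weight
    pruned = np.argsort(saliency)[:4]          # drop the 4 least salient weights
    w[pruned] = 0.0
    print("pruned weight indices:", np.sort(pruned))  # matches near-zero truths
    ```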

  3. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline with a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% on cost function evaluation.

  4. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system, viewed as the estimation of a predictor function, is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimally fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatiotemporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
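
    The first stage of such a scheme rests on minimum-variance fusion of redundant measurements. A minimal sketch of that step alone, assuming two unbiased sensors with known noise variances (the LS-SVM function estimator that follows in the paper is not reproduced here):

    ```python
    # Inverse-variance weighted fusion of two noisy views of one signal.
    import numpy as np

    def fuse(y1, y2, var1, var2):
        """Minimum-variance linear fusion of two unbiased measurements."""
        w1 = var2 / (var1 + var2)          # weights sum to 1
        return w1 * y1 + (1 - w1) * y2     # fused var: var1*var2/(var1+var2)

    rng = np.random.default_rng(1)
    truth = np.sin(np.linspace(0, 6, 200))
    y1 = truth + rng.normal(0, 0.3, 200)   # sensor 1, variance 0.09
    y2 = truth + rng.normal(0, 0.1, 200)   # sensor 2, variance 0.01
    fused = fuse(y1, y2, 0.3**2, 0.1**2)
    print("MSE sensor1/sensor2/fused:",
          *(np.mean((y - truth)**2) for y in (y1, y2, fused)))
    ```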

  5. Influence of cost functions and optimization methods on solving the inverse problem in spatially resolved diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.

    2017-03-01

    Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for in vivo characterization of pathological modifications in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combinations of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized, and the optimization method are discussed.
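
    The random-restart global strategy described here is straightforward to prototype. A hedged sketch, using a toy two-parameter reflectance model (a stand-in, not the paper's diffusion-approximation solution) and SciPy's trust-region-reflective least-squares solver:

    ```python
    # Multi-start local optimization: repeated bounded least-squares fits
    # from random initial optical parameters; keep the best fit.
    import numpy as np
    from scipy.optimize import least_squares

    def forward(op, r):
        """Toy spatially resolved reflectance: op = (mu_a, mu_s')."""
        mu_a, mu_s = op
        return mu_s * np.exp(-np.sqrt(3 * mu_a * (mu_a + mu_s)) * r) / r**2

    rng = np.random.default_rng(2)
    r = np.linspace(0.5, 5.0, 30)                 # source-detector distances (mm)
    data = forward([0.01, 1.0], r) * (1 + 0.02 * rng.standard_normal(30))

    best = None
    for _ in range(20):                           # random restarts
        x0 = rng.uniform([1e-3, 0.2], [0.1, 3.0]) # random initial (mu_a, mu_s')
        res = least_squares(lambda p: forward(p, r) - data, x0,
                            bounds=([1e-4, 0.1], [1.0, 5.0]), method='trf')
        if best is None or res.cost < best.cost:
            best = res
    print("estimated (mu_a, mu_s'):", best.x)
    ```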

  6. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  7. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
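
    The q-method step referenced here has a compact closed form: the optimal quaternion in Wahba's problem is the dominant eigenvector of Davenport's K matrix. A minimal sketch under the vector-first quaternion convention; the paper's outer iteration over the non-attitude parameters is omitted.

    ```python
    # Davenport q-method: optimal quaternion = eigenvector of K with the
    # largest eigenvalue. Quaternion convention: [vector, scalar].
    import numpy as np

    def q_method(body_vecs, ref_vecs, weights):
        B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
        z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
        K = np.zeros((4, 4))
        K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
        K[:3, 3] = K[3, :3] = z
        K[3, 3] = np.trace(B)
        vals, vecs = np.linalg.eigh(K)
        return vecs[:, -1]                  # eigenvector of largest eigenvalue

    # Example: body frame rotated 90 degrees about z relative to reference.
    b = [np.array([0., 1., 0.]), np.array([-1., 0., 0.])]
    r = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]
    # Prints ~ +/-[0, 0, -0.707, 0.707]: the 90-degree z rotation in this
    # convention (quaternions are unique only up to an overall sign).
    print(q_method(b, r, [0.5, 0.5]))
    ```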

  8. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
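
    A rough sketch of such a hybrid, pairing a firefly-style attraction move with a DE/rand/1 mutation-and-select step on a standard test function; the constants and the simplified DE step (no crossover) are illustrative assumptions, not the published algorithm:

    ```python
    # Firefly attraction move followed by a DE-style mutate-and-select step.
    import numpy as np

    def rosenbrock(x):
        return np.sum(100*(x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

    rng = np.random.default_rng(10)
    pop = rng.uniform(-2, 2, (30, 4))
    fit = np.array([rosenbrock(x) for x in pop])
    beta0, gamma, alpha, F = 1.0, 1.0, 0.05, 0.5
    for it in range(300):
        for i in range(len(pop)):
            # Firefly step: move toward every brighter (better) individual.
            for j in np.flatnonzero(fit < fit[i]):
                r2 = np.sum((pop[i] - pop[j])**2)
                pop[i] += (beta0 * np.exp(-gamma * r2) * (pop[j] - pop[i])
                           + alpha * rng.standard_normal(4))
            # DE step: rand/1 mutant, kept only if it improves the member.
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = a + F * (b - c)
            if rosenbrock(trial) < rosenbrock(pop[i]):
                pop[i] = trial
            fit[i] = rosenbrock(pop[i])
    print("best cost:", fit.min())
    ```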

  9. An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172

  10. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray Computed Tomography (CT) systems perform complex imaging tasks to detect and estimate system parameters, such as a baggage imaging system performing threat detection and generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric which considers both detection and estimation information and then provides the user with the collection of possible optimal outcomes. In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) will be explored. EDIT produces curves which allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. The curves can be calculated by evaluating a wide variety of systems and finding the optimal system that maximizes a figure of merit. EDIT can also be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation which best fits the constraints of their actual system.

  11. Estimation of power lithium-ion battery SOC based on fuzzy optimal decision

    NASA Astrophysics Data System (ADS)

    He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    Improving vehicle performance and safety requires accurate estimation of the state of charge (SOC) of the power lithium-ion battery. After analyzing common SOC estimation methods, and based on the open-circuit-voltage characteristics and the Kalman filter algorithm, a lithium battery SOC estimation method based on fuzzy optimal decision is established using a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.

  12. A chance constraint estimation approach to optimizing resource management under uncertainty

    Treesearch

    Michael Bevers

    2007-01-01

    Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...

  13. Fast Quaternion Attitude Estimation from Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
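
    For reference, the suboptimal branch of this comparison, the TRIAD construction, fits in a few lines; the 90-degree rotation example data are illustrative:

    ```python
    # TRIAD: build an orthonormal triad from each vector pair and compose
    # the attitude matrix directly, trusting the first vector exactly.
    import numpy as np

    def triad(b1, b2, r1, r2):
        """Attitude matrix A with b = A r."""
        def basis(v1, v2):
            v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
            t1 = v1 / np.linalg.norm(v1)
            t2 = np.cross(v1, v2)
            t2 /= np.linalg.norm(t2)
            return np.column_stack([t1, t2, np.cross(t1, t2)])
        return basis(b1, b2) @ basis(r1, r2).T

    # Body frame rotated 90 degrees about z relative to the reference frame.
    A = triad([0, 1, 0], [-1, 0, 0], [1, 0, 0], [0, 1, 0])
    print(np.round(A, 6))   # recovers the 90-degree z rotation matrix
    ```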

  14. A two-step parameter optimization algorithm for improving estimation of optical properties using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Hu, Dong; Lu, Renfu; Ying, Yibin

    2018-03-01

    This research was aimed at optimizing the inverse algorithm for estimating the optical absorption (μa) and reduced scattering (μs') coefficients from spatial frequency domain diffuse reflectance. Studies were first conducted to determine the optimal frequency resolution and start and end frequencies in terms of the reciprocal of the mean free path (1/mfp'). The results showed that the optimal frequency resolution increased with μs' and remained stable when μs' was larger than 2 mm^-1. The optimal end frequency decreased from 0.3/mfp' to 0.16/mfp' with μs' ranging from 0.4 mm^-1 to 3 mm^-1, while the optimal start frequency remained at 0 mm^-1. A two-step parameter estimation method was proposed based on the optimized frequency parameters, which improved estimation accuracies by 37.5% and 9.8% for μa and μs', respectively, compared with the conventional one-step method. Experimental validations with seven liquid optical phantoms showed that the optimized algorithm resulted in mean absolute errors of 15.4%, 7.6%, 5.0% for μa and 16.4%, 18.0%, 18.3% for μs' at the wavelengths of 675 nm, 700 nm, and 715 nm, respectively. Hence, implementation of the optimized parameter estimation method should be considered in order to improve the measurement of optical properties of biological materials when using the spatial frequency domain imaging technique.

  15. Adaptive torque estimation of robot joint with harmonic drive transmission

    NASA Astrophysics Data System (ADS)

    Shi, Zhiguo; Li, Yuankai; Liu, Guangjun

    2017-11-01

    Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.

  16. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.

  17. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error.

    PubMed

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo

    2014-05-01

    Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field is optimized. The optimal velocity field minimizes a novel similarity function, which the authors call the intensity consistency error, defined over multiple consecutive frames evolved to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporally diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions are in good agreement with ground truth and that the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.

  18. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.
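
    The binomial arithmetic behind the classic 29-of-29 demonstration is easy to check directly. A sketch, assuming the standard point-estimate pass rule that every flaw in the set must be detected:

    ```python
    # Probability of passing a 29/29 demonstration as a function of true POD.
    from scipy.stats import binom

    n = 29                                  # flaws in the demonstration set
    for pod in (0.90, 0.95, 0.99):
        ppd = binom.pmf(n, n, pod)          # pass = detect all n flaws
        print(f"true POD={pod:.2f}: probability of passing = {ppd:.3f}")
    # A 29/29 pass bounds POD >= 0.90 at 95% confidence: 0.90**29 ~ 0.047 < 0.05.
    ```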

  19. An evaluation of methods for estimating the number of local optima in combinatorial optimization problems.

    PubMed

    Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A

    2013-01-01

    The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed not only come from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation in synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
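
    One family of estimators in this literature treats local optima like species in a capture-recapture survey: repeated restarts record which optimum each run reaches, and a richness estimator extrapolates the unseen count. A hedged sketch with a Chao-type lower bound on a toy random landscape; the landscape and one-bit-flip neighborhood are illustrative assumptions:

    ```python
    # Estimate the number of local optima from repeated hill-climbing restarts.
    import random
    from collections import Counter

    rng = random.Random(3)
    n = 10
    value = [rng.random() for _ in range(2 ** n)]   # random landscape on {0,1}^n

    def hill_climb(x):
        """Best-improvement ascent under the one-bit-flip neighborhood."""
        while True:
            best = max((x ^ (1 << i) for i in range(n)), key=value.__getitem__)
            if value[best] <= value[x]:
                return x                            # x is a local optimum
            x = best

    hits = Counter(hill_climb(rng.randrange(2 ** n)) for _ in range(400))
    d = len(hits)                                   # distinct optima observed
    f1 = sum(1 for c in hits.values() if c == 1)    # optima seen exactly once
    f2 = sum(1 for c in hits.values() if c == 2)    # optima seen exactly twice
    chao = d + f1 * f1 / (2 * f2) if f2 else d + f1 * (f1 - 1) / 2
    print(f"observed {d} local optima, Chao estimate ~ {chao:.0f}")
    ```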

  20. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
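
    In the spirit of that hybrid estimator, the sketch below runs Tyler's fixed-point iteration with shrinkage toward the identity. The fixed shrinkage intensity rho is an assumption; the paper tunes it online using random-matrix-theory results.

    ```python
    # Shrunk Tyler M-estimator for heavy-tailed samples (fixed rho).
    import numpy as np

    def tyler_shrinkage(X, rho=0.1, iters=100):
        n, p = X.shape
        C = np.eye(p)
        for _ in range(iters):
            # Per-sample weights from the current scatter estimate.
            w = p / np.einsum('ij,jk,ik->i', X, np.linalg.inv(C), X)
            C = (1 - rho) * (X * w[:, None]).T @ X / n + rho * np.eye(p)
            C *= p / np.trace(C)               # fix the scale (trace = p)
        return C

    rng = np.random.default_rng(4)
    p, n = 20, 60
    returns = rng.standard_t(df=3, size=(n, p))    # heavy-tailed asset returns
    C = tyler_shrinkage(returns)
    ones = np.ones(p)                              # min-variance weights C^-1 1
    w_mv = np.linalg.solve(C, ones) / (ones @ np.linalg.solve(C, ones))
    print("portfolio weights (first 5):", np.round(w_mv[:5], 3))
    ```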

  1. Investigating the detection of multi-homed devices independent of operating systems

    DTIC Science & Technology

    2017-09-01

    Timestamp data was used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed...

  2. Big climate data analysis

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred

    2015-04-01

    The Big Data era has begun also in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data volume, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014: Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Here too, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change related risks.

  3. Research on bathymetry estimation by Worldview-2 based on the semi-analytical model

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far away from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over a disputed, sensitive area of interest. Thus, methods for obtaining reef bathymetry accurately are urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distance, large area, and sensitive location. Remote sensing data provide an effective way to estimate bathymetry over a large area without physical contact, through the relationship between spectral information and bathymetry. Aimed at the water quality of the South Sea of China, our paper develops a bathymetry estimation method that does not require measured water depth. First, a semi-analytical optimization model of the theoretical interpretation models is studied, based on a genetic algorithm used to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island of the South Sea of China is selected as our study area, and measured water depths are used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in our study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water area is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurement. In general, our paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.

  4. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem using classical calculus of variations methods. Two perturbation methods were devised; these methods attempt to reduce the sensitivity of the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solutions resulting from a number of different initial co-state estimates. The perturbation methods were compared, and it was found that they are an improvement over existing methods.

  5. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.

  6. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem, and we propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome-regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.

  7. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  8. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data.

  9. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling-based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
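
    The common framework these methods share is an adaptive surrogate loop: fit a cheap surrogate to all expensive evaluations so far, optimize the surrogate, evaluate the true model at the surrogate optimum, and refit. A minimal sketch of that loop; the RBF surrogate and random candidate search are illustrative stand-ins, not the published ASMO algorithm:

    ```python
    # Adaptive surrogate-based optimization loop (ASMO-style skeleton).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def expensive_model(x):                     # stand-in for a real model run
        return np.sum((x - 0.3) ** 2, axis=-1)

    rng = np.random.default_rng(8)
    X = rng.uniform(0, 1, (10, 2))              # initial design
    y = expensive_model(X)
    for _ in range(20):                         # 20 adaptive model runs
        surrogate = RBFInterpolator(X, y)       # refit surrogate to all data
        cand = rng.uniform(0, 1, (2000, 2))     # cheap surrogate search
        x_new = cand[np.argmin(surrogate(cand))]
        X = np.vstack([X, x_new])               # evaluate truth at the optimum
        y = np.append(y, expensive_model(x_new))
    print("best point found:", X[np.argmin(y)])
    ```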

  10. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations, a systematic way to manage model uncertainty, and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  11. Optimizing hidden layer node number of BP network to estimate fetal weight

    NASA Astrophysics Data System (ADS)

    Su, Juan; Zou, Yuanwen; Lin, Jiangli; Wang, Tianfu; Li, Deyu; Xie, Tao

    2007-12-01

    The ultrasonic estimation of fetal weight before delivery is of great significance for the obstetrical clinic. Estimating fetal weight more accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth, and reducing the risk of newborn complications. In this paper, we introduce a method which combines golden section search and an artificial neural network (ANN) to estimate fetal weight. The golden section search is employed to optimize the hidden layer node number of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation and avoids choosing the hidden layer node number by subjective experience. The estimation coincidence rate reaches 74.19%, and the mean absolute error is 185.83 g.
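
    The golden-section step is independent of the network details. A sketch of the search over integer node counts, with a toy unimodal validation-error curve standing in for actually training and validating the BP network:

    ```python
    # Golden-section search over the hidden-layer size. Evaluations are not
    # reused between iterations, which a production version would do.
    import math

    def val_error(nodes):                     # stand-in for train-and-validate
        return (nodes - 23) ** 2 / 100 + 180.0   # toy curve, minimum near 23

    phi = (math.sqrt(5) - 1) / 2              # 0.618...
    lo, hi = 2, 60                            # candidate node-count range
    while hi - lo > 2:
        a = round(hi - phi * (hi - lo))
        b = round(lo + phi * (hi - lo))
        if val_error(a) < val_error(b):
            hi = b
        else:
            lo = a
    best = min(range(lo, hi + 1), key=val_error)
    print("selected hidden nodes:", best)
    ```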

  12. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents a new unified analysis of the estimate errors of model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are broadly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses with no additional computational load by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.

  13. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems for the predictability of optimization, as most numerical optimization methods are stochastic and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary optimization duration becomes critical when a high number of combinations of adjustable parameters must be evaluated or when dynamic models are large. This task is complex due to the variety of optimization methods and software tools and the nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization-method parameters, and number of adjustable model parameters. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum or values close to it, as well as the computational time necessary to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available on www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    PubMed

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within system biology.
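
    Since differential evolution comes out best in this comparison, a minimal sketch of the task shape is shown below: fitting two rate constants of a toy two-compartment ODE to noisy observations with SciPy's DE implementation. The toy model is an illustrative stand-in, not the Rab5/Rab7 cut-out switch model.

    ```python
    # Differential evolution for ODE parameter estimation from noisy data.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def model(t, y, k1, k2):
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]

    t_obs = np.linspace(0, 10, 25)
    true = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=(0.8, 0.3)).y
    rng = np.random.default_rng(5)
    data = true + rng.normal(0, 0.02, true.shape)   # imperfect measurements

    def cost(params):
        sol = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs,
                        args=tuple(params))
        return np.sum((sol.y - data) ** 2)

    res = differential_evolution(cost, bounds=[(0.01, 5), (0.01, 5)], seed=5)
    print("estimated (k1, k2):", res.x)             # true values: (0.8, 0.3)
    ```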

  15. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    PubMed Central

    2011-01-01

    Background: We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results: We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions: Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within system biology. PMID:21989196

  16. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  17. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
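
    A minimal sketch of the stochastic half of this comparison: random-walk Metropolis sampling of the Cole-Cole parameters (rho0, m, tau, c) from noisy complex-resistivity spectra. The flat bounded priors and fixed step sizes are illustrative assumptions, not the paper's tuned sampler.

    ```python
    # Random-walk Metropolis over Cole-Cole parameters.
    import numpy as np

    def cole_cole(omega, rho0, m, tau, c):
        return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

    omega = np.logspace(-2, 3, 40)
    truth = np.array([100.0, 0.5, 0.01, 0.5])
    rng = np.random.default_rng(6)
    sigma = 0.5
    data = cole_cole(omega, *truth) + sigma * (rng.standard_normal(40)
                                               + 1j * rng.standard_normal(40))

    def log_post(p):
        rho0, m, tau, c = p
        if not (0 < m < 1 and 0 < c < 1 and rho0 > 0 and tau > 0):
            return -np.inf                     # flat prior with bounds
        r = data - cole_cole(omega, *p)
        return -np.sum(np.abs(r) ** 2) / (2 * sigma ** 2)

    p = np.array([80.0, 0.3, 0.05, 0.4])       # arbitrary starting point
    step = np.array([1.0, 0.02, 0.002, 0.02])  # fixed proposal step sizes
    samples, lp = [], log_post(p)
    for _ in range(20000):
        q = p + step * rng.standard_normal(4)
        lq = log_post(q)
        if np.log(rng.random()) < lq - lp:     # Metropolis accept/reject
            p, lp = q, lq
        samples.append(p)
    print("posterior means:", np.mean(samples[5000:], axis=0))
    ```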

  18. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain a closed-form solution for the optimal distribution of integration time in an approximate way by employing the delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
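
    Under a simple shot-noise (Poisson counting) assumption, which stands in for the paper's noise model, the optimization reduces to a one-dimensional problem that can be checked numerically against a square-root allocation rule:

    ```python
    # Delta-method variance of the DOLP estimator vs. integration-time split.
    import numpy as np
    from scipy.optimize import minimize_scalar

    I0, I90, T = 1000.0, 300.0, 1.0        # channel fluxes and total time

    def dolp_var(t0):
        """Delta-method variance with Poisson counts: var(I_hat) = I/t."""
        t90 = T - t0
        S = I0 + I90
        return (2*I90/S**2)**2 * I0/t0 + (2*I0/S**2)**2 * I90/t90

    res = minimize_scalar(dolp_var, bounds=(1e-3, T - 1e-3), method='bounded')
    print(f"optimal t0 fraction = {res.x / T:.3f}")
    # Closed form under this model: t0/t90 = sqrt(I90/I0), i.e. more time
    # on the dimmer channel.
    print(f"closed-form fraction = {1 / (1 + np.sqrt(I0 / I90)):.3f}")
    ```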

  19. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  20. Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Adib, Artur B.

    2008-05-01

    An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
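
    For the free-energy-difference special case, this optimized bidirectional estimator coincides with Bennett's acceptance ratio (BAR). A sketch solving the BAR self-consistency condition by bisection, assuming equal numbers of forward and reverse samples; the Gaussian work distributions (in units of kT) are synthetic test data:

    ```python
    # Bennett acceptance ratio from bidirectional work measurements.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import expit

    rng = np.random.default_rng(7)
    dF_true, sig = 2.0, 1.5
    # Gaussian work distributions consistent with the Crooks relation:
    # mean dissipation sig^2/2 in each direction (kT = 1).
    w_f = rng.normal(dF_true + sig**2 / 2, sig, 5000)    # forward work
    w_r = rng.normal(-dF_true + sig**2 / 2, sig, 5000)   # reverse work

    def bar_residual(dF):
        fermi = lambda x: expit(-x)          # 1 / (1 + exp(x)), overflow-safe
        return np.sum(fermi(w_f - dF)) - np.sum(fermi(w_r + dF))

    dF = brentq(bar_residual, -10, 10)       # root of the BAR condition
    print(f"BAR estimate: {dF:.3f}  (truth {dF_true})")
    ```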

  1. On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2011-01-01

    This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the σ-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin, since these depend on the length of the time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound of the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.
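
    To make the Pade route concrete, the sketch below estimates a delay margin for a simple fixed (non-adaptive) loop: the delay term e^(-tau*s) is replaced by a first-order Pade approximant and the largest stable tau is found by bisection on the roots of the characteristic polynomial. The plant, gain, and Pade order are assumptions for illustration, and the closing comment notes how crude the first-order approximant is here.

```python
# Delay-margin estimate for a fixed loop via a first-order Pade approximant
# and bisection; plant G(s) = 1/(s + a) and gain K are illustrative, not
# the paper's adaptive system.
import numpy as np

a, K = 1.0, 5.0

def stable_with_delay(tau):
    # Closed loop 1 + K*G(s)*exp(-tau*s) = 0 with the approximant
    # exp(-tau*s) ~ (1 - tau*s/2)/(1 + tau*s/2) gives the polynomial
    # (tau/2)*s^2 + (1 + (a - K)*tau/2)*s + (a + K) = 0.
    p = [tau / 2, 1 + (a - K) * tau / 2, a + K]
    return np.all(np.real(np.roots(p)) < 0)

lo, hi = 1e-6, 10.0                       # bisection bracket on tau
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if stable_with_delay(mid) else (lo, mid)

print("Pade-based delay margin:", round(lo, 4))
# The exact margin for this loop, acos(-a/K)/sqrt(K^2 - a^2) ~ 0.362, is
# smaller: the first-order approximant is crude at this crossover frequency.
```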

  2. Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.

    PubMed

    Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing

    2017-10-11

    In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
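
    The sketch below illustrates the partitioned idea on a static toy problem: two areas with non-overlapping states each solve a small local weighted least-squares problem, treating the latest neighbour estimates as known, and the sweeps converge to the centralized solution. The horizon, process model, and state constraints of mPMHE are deliberately omitted; all matrices are synthetic.

```python
# Two-area partitioned weighted least-squares sketch: each area solves for
# its own states given the neighbour's latest estimate; sweeps converge to
# the centralized solution. Horizon, process model, and constraints of
# mPMHE are omitted; all matrices are synthetic.
import numpy as np

rng = np.random.default_rng(2)
x_true = rng.standard_normal(6)             # area 1 owns states 0-2, area 2 owns 3-5
H = rng.standard_normal((10, 6))            # measurement model z = H x + noise
z = H @ x_true + 0.05 * rng.standard_normal(10)
W = np.eye(10) / 0.05**2                    # weights = inverse noise variance

idx = [np.arange(0, 3), np.arange(3, 6)]    # non-overlapping partitions
x = np.zeros(6)
for sweep in range(30):
    for k in (0, 1):
        own, other = idx[k], idx[1 - k]
        Ho = H[:, own]
        r = z - H[:, other] @ x[other]      # subtract neighbour contribution
        x[own] = np.linalg.solve(Ho.T @ W @ Ho, Ho.T @ W @ r)   # local WLS

x_central = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("gap to centralized estimate:", np.linalg.norm(x - x_central))
```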

  3. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  4. Information's role in the estimation of chaotic signals

    NASA Astrophysics Data System (ADS)

    Drake, Daniel Fred

    1998-11-01

    Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise. Finally, a slight modification of the linear description leads to a method of synthesizing, on limited-precision platforms, "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.

  5. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.

  6. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  7. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  8. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  9. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of the estimate errors incurred by model-matching phase-estimation methods such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.

  10. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy of Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data. PMID:23593445

  11. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  12. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  13. Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model

    PubMed Central

    Seo, Dong Gi; Weiss, David J.

    2015-01-01

    Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848

  14. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    PubMed

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.

  15. Distributed weighted least-squares estimation with fast convergence for large-scale systems☆

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

  16. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  17. Continuous Shape Estimation of Continuum Robots Using X-ray Images

    PubMed Central

    Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960

  18. Continuous Shape Estimation of Continuum Robots Using X-ray Images.

    PubMed

    Lobaton, Edgar J; Fu, Jinghua; Torres, Luis G; Alterovitz, Ron

    2013-05-06

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot's shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints.

  19. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
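
    A minimal sketch of the convex-estimation core, assuming cvxpy is available: a Gaussian log-likelihood is maximized subject to convex constraints, with an l1 penalty on second differences acting as a piecewise-linear trend prior (l1 trend filtering). The signal, penalty weight, and bounds are illustrative, not the thesis's aerospace applications.

```python
# l1 trend filtering as a MAP estimate with a convex prior, assuming cvxpy:
# Gaussian likelihood (sum of squares) plus an l1 penalty on second
# differences; signal, weight, and bounds are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n = 200
t = np.arange(n, dtype=float)
trend = np.where(t < 80, 0.05 * t, 4.0 - 0.03 * (t - 80))   # kinked trend
y = trend + 0.5 * rng.standard_normal(n)                    # noisy data

x = cp.Variable(n)
lam = 50.0                                   # prior strength (tuning knob)
obj = cp.Minimize(cp.sum_squares(y - x) + lam * cp.norm1(cp.diff(x, 2)))
prob = cp.Problem(obj, [x >= -10, x <= 10])  # example convex constraints
prob.solve()

print("estimated trend around the kink:", np.round(x.value[78:83], 2))
```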

  20. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods on key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  1. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by importance in terms of entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it builds on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
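
    The sketch below reproduces only the baseline KECA step that OKECA builds on: eigenpairs of an RBF kernel matrix are ranked by their contribution to a Renyi entropy estimate rather than by eigenvalue, and the most entropic components are kept. OKECA's extra ICA-style rotation and its gradient-ascent kernel-parameter search are not reproduced; the data and length scale are made up.

```python
# Baseline KECA step that OKECA extends: rank RBF-kernel eigenpairs by
# Renyi entropy contribution instead of eigenvalue; the ICA-style rotation
# and kernel-parameter search of OKECA are not reproduced.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((150, 5))                  # toy data

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 2.0**2))                     # RBF kernel, length scale 2

lam, E = np.linalg.eigh(K)
lam, E = lam[::-1], E[:, ::-1]                     # descending eigenvalues

# Component i contributes (sqrt(lam_i) * 1^T e_i)^2 to the Renyi entropy
# estimate V = (1/N^2) 1^T K 1, so sort by that instead of by lam_i.
contrib = lam * (E.sum(axis=0) ** 2)
order = np.argsort(contrib)[::-1]

keep = order[:2]                                   # the two most entropic
proj = E[:, keep] * np.sqrt(np.clip(lam[keep], 0.0, None))
print("components kept by entropy rank:", keep, "projection:", proj.shape)
```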

  2. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  3. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in mask areas is computationally heavy and unoptimized. We propose an improved PCA based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used in the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.

  4. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered in power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process and comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load at lead times up to 168 hours. The results obtained are documented and compared with results based on the Box-Jenkins method.
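
    A hedged sketch of the IRWLS step on a plain (non-seasonal) AR(2) model: ordinary least squares initializes the coefficients, and L1-type reweighting then downweights outlier-contaminated residuals. The model order, contamination, and weight floor are illustrative choices, not the paper's seasonal multiplicative procedure.

```python
# IRWLS for AR coefficient estimation on a plain AR(2) with outliers;
# L1-type reweighting downweights large residuals. Model order and
# contamination are illustrative, not the paper's seasonal model.
import numpy as np

rng = np.random.default_rng(5)
n, phi_true = 500, np.array([0.6, -0.3])
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi_true[0] * y[t - 1] + phi_true[1] * y[t - 2] + rng.standard_normal()
y[rng.integers(2, n, 10)] += 8.0                  # contaminating outliers

X = np.column_stack([y[1:-1], y[:-2]])            # lagged regressors
z = y[2:]

phi = np.linalg.lstsq(X, z, rcond=None)[0]        # ordinary LS start
for _ in range(20):
    r = z - X @ phi
    w = 1.0 / np.maximum(np.abs(r), 1e-3)         # L1-type weights
    sw = np.sqrt(w)                               # row scaling gives weights w
    phi = np.linalg.lstsq(X * sw[:, None], sw * z, rcond=None)[0]

print("IRWLS estimate:", np.round(phi, 3), "true:", phi_true)
```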

  5. Optimal regionalization of extreme value distributions for flood estimation

    NASA Astrophysics Data System (ADS)

    Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.

    2018-01-01

    Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured by a hydrological distance based on differences in physical and meteorological catchment attributes. We develop a statistical method for estimating high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
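
    For the at-site building block only, a minimal sketch using scipy: fit a generalized extreme value (GEV) distribution to annual maxima by maximum likelihood and read off a 100-year return level. The paper's actual contribution, attribute-weighted regional pooling of such fits, is not reproduced; the synthetic discharges are placeholders.

```python
# At-site GEV fit and 100-year return level with scipy; regional pooling
# with optimized attribute weights (the paper's contribution) is omitted,
# and the annual maxima are synthetic placeholders.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
annual_max = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60,
                            random_state=rng)     # synthetic discharges

c, loc, scale = genextreme.fit(annual_max)        # maximum-likelihood fit
T = 100                                           # return period, years
level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(f"estimated {T}-year return level: {level:.1f} (same units as data)")
```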

  6. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  7. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.

  8. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.

  9. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  10. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging.

    PubMed

    Rong, Xing; Du, Yong; Frey, Eric C

    2012-06-21

    Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, in situations such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this is especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance alone is therefore not a complete measure of the reliability of the estimates, and hence not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume-of-interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of the FOM taking into account only the variance of the activity estimates, demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in limiting the reliability of the activity estimates.

  11. Robust Maneuvering Envelope Estimation Based on Reachability Analysis in an Optimal Control Formulation

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John

    2013-01-01

    This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied to an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.

  12. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square-integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L²-norm for the scalar unknown u and a priori error estimates in the (L²)²-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H¹-norm for the scalar unknown u. Finally, we give some numerical results to illustrate the efficiency of the new method. PMID:24701153

  13. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, little of this work considers the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. In signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values, nor from getting stuck in local optima, as conventional numerical optimization techniques often do. The simulation results indicate the soundness of the new method.

  14. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
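
    The general Lagrange-multiplier step behind such allocations can be written compactly. Assuming the estimator variance takes the generic form V = Σᵢ aᵢ/tᵢ under a shot-noise-style model, where the coefficients aᵢ collect the intensity-dependent sensitivity factors (the paper derives the specific aᵢ from the Stokes measurement matrix), the optimal split follows from stationarity of the Lagrangian:

```latex
\min_{t_1,\dots,t_4} \; V(\mathbf{t}) = \sum_{i=1}^{4}\frac{a_i}{t_i}
\quad\text{s.t.}\quad \sum_{i=1}^{4} t_i = T;
\qquad
\frac{\partial}{\partial t_i}\!\left(\sum_{j}\frac{a_j}{t_j}
  + \lambda\sum_{j} t_j\right) = -\frac{a_i}{t_i^{2}} + \lambda = 0
\;\Longrightarrow\;
t_i^{\star} = T\,\frac{\sqrt{a_i}}{\sum_{j=1}^{4}\sqrt{a_j}} .
```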

  15. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function (CDF) for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the CDF. These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
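
    A runnable sketch of the core loop, assuming scipy: the Anderson-Darling statistic for a three-parameter Weibull is minimized with Powell's derivative-free method, mirroring the paper's use of Powell's procedure. The synthetic failure data, starting values, and feasibility guard are illustrative.

```python
# Minimize the Anderson-Darling statistic over three-parameter Weibull
# parameters with Powell's derivative-free method; data, starting values,
# and the feasibility guard are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
data = np.sort(weibull_min.rvs(2.0, loc=50.0, scale=300.0, size=40,
                               random_state=rng))   # synthetic failure data
n = len(data)

def anderson_darling(params):
    shape, loc, scale = params
    if shape <= 0 or scale <= 0 or loc >= data[0]:
        return 1e10                                 # infeasible parameters
    F = np.clip(weibull_min.cdf(data, shape, loc=loc, scale=scale),
                1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    # A^2 = -n - (1/n) * sum (2i-1) [ln F(x_i) + ln(1 - F(x_{n+1-i}))]
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

res = minimize(anderson_darling, x0=[1.5, 0.0, 300.0], method="Powell")
print("AD-optimal (shape, location, scale):", np.round(res.x, 2))
```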

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.

    Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping of the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in regression of PSD vs RAK, seven clusters of clinical groups were optimal in regression of PSD vs KAP, and six clusters of clinical groupsmore » were optimal in regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method.Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.« less

  17. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is too optimistic. To adjust for this upward bias, several methods are proposed. Among them, the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, the proposed methods can be applied for variable selection, to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
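    A sketch of the empirical-AUC maximization idea with two hypothetical test scores follows; after rescaling, a two-test linear combination has a single free coefficient, and the AUC printed is the optimistic re-substitution (apparent) estimate the article warns about.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
# Hypothetical scores from two diagnostic tests
healthy = rng.normal(0.0, 1.0, (200, 2))
diseased = rng.normal([0.8, 0.5], 1.0, (200, 2))

def auc(s0, s1):
    # Empirical AUC: probability a diseased score exceeds a healthy one
    diff = s1[:, None] - s0[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def neg_auc(theta):
    # After scaling, the combination is test1 + theta * test2
    w = np.array([1.0, theta])
    return -auc(healthy @ w, diseased @ w)

res = minimize_scalar(neg_auc, bounds=(-10.0, 10.0), method='bounded')
print("coefficient:", res.x)
print("apparent (re-substitution) AUC:", -res.fun)  # optimistic; the
# article advocates cross-validation to correct the upward bias
```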

  18. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.

  19. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely the OptShrink LR+S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR+S method yields good qualitative and quantitative results.

  20. Estimating Scale Economies and the Optimal Size of School Districts: A Flexible Form Approach

    ERIC Educational Resources Information Center

    Schiltz, Fritz; De Witte, Kristof

    2017-01-01

    This paper investigates estimation methods to model the relationship between school district size, costs per student and the organisation of school districts. We show that the assumptions on the functional form strongly affect the estimated scale economies and offer two possible solutions to allow for more flexibility in the estimation method.…

  1. Spatio-Temporal Field Estimation Using Kriged Kalman Filter (KKF) with Sparsity-Enforcing Sensor Placement.

    PubMed

    Roy, Venkat; Simonetto, Andrea; Leus, Geert

    2018-06-01

    We propose a sensor placement method for spatio-temporal field estimation based on a kriged Kalman filter (KKF) using a network of static or mobile sensors. The developed framework dynamically designs the optimal constellation in which to place the sensors. We combine the estimation error minimization problem (for the stationary as well as the non-stationary component of the field) with a sparsity-enforcing penalty to design the optimal sensor constellation in an economical manner. The developed sensor placement method can be directly used for a general class of covariance matrices (ill-conditioned or well-conditioned) modelling the spatial variability of the stationary component of the field, which acts as a correlated observation noise, while estimating the non-stationary component of the field. Finally, a KKF estimator is used to estimate the field using the measurements from the selected sensing locations. Numerical results demonstrate the feasibility of the proposed dynamic sensor placement followed by the KKF estimation method.

  2. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
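    The traditional mono-exponential back-extrapolation that the paper improves upon can be sketched in a few lines; the sampling times, concentrations, and dose below are hypothetical.

```python
import numpy as np

# Hypothetical ICG samples drawn a few minutes post-injection
t_min = np.array([2.0, 3.0, 4.0, 5.0])   # sampling times (minutes)
conc = np.array([4.1, 3.4, 2.8, 2.3])    # ICG concentration (mg/L)
dose_mg = 25.0                           # injected dose (assumed)

# Traditional method: fit a mono-exponential decay on the log scale
# and back-extrapolate to the injection time t = 0.
slope, intercept = np.polyfit(t_min, np.log(conc), 1)
c0 = np.exp(intercept)                   # extrapolated concentration at t=0
print(f"C0 = {c0:.2f} mg/L, plasma volume = {dose_mg / c0:.2f} L")
```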

  3. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a exible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quanti cation with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components requiredmore » for iterative systems analyses, the Dakota toolkit provides a exible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quanti cation, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.« less

  4. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
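    A minimal sketch of leave-pair-out cross-validation for the C statistic, assuming a logistic model and scikit-learn: each case-control pair is held out in turn, and the C statistic is the fraction of held-out pairs ranked correctly.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = (X[:, 0] + rng.normal(size=40) > 0).astype(int)  # synthetic outcome

cases = np.where(y == 1)[0]
controls = np.where(y == 0)[0]
wins = 0.0
for i, j in product(cases, controls):
    train = np.setdiff1d(np.arange(len(y)), [i, j])  # hold out one pair
    fit = LogisticRegression().fit(X[train], y[train])
    p_case, p_control = fit.predict_proba(X[[i, j]])[:, 1]
    wins += 1.0 if p_case > p_control else (0.5 if p_case == p_control else 0.0)

# Fraction of case-control pairs ranked correctly = optimism-adjusted C
print("leave-pair-out C statistic:", wins / (len(cases) * len(controls)))
```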

  5. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The point estimate method is used by NASA for qualifying special NDE procedures, and it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
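    The 29-flaw convention follows directly from the binomial model: if the true POD at the demonstrated flaw size were only 0.90, the chance of detecting all 29 flaws would fall just below 5%, as this quick check shows.

```python
from scipy.stats import binom

# If the true POD were only 0.90, the chance of detecting all 29 flaws is
p_pass = binom.pmf(29, 29, 0.90)   # equivalently 0.90**29
print(f"P(29/29 | POD = 0.90) = {p_pass:.4f}")   # about 0.047 < 0.05
# Since this is below 5%, 29 detections out of 29 demonstrates a flaw size
# with at least 90% POD at 95% confidence (the alpha_90/95 convention).
```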

  6. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. Many theoretically motivated models demand a greater number of cosmological parameters than the standard model of cosmology uses, which makes the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and has no local maxima, so sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of a method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
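    A minimal PSO sketch on a toy multimodal surface is given below to illustrate the swarm update rule; it is not the authors' WMAP analysis pipeline, and the tuning constants and objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def neg_log_like(x):
    # Toy multimodal surface standing in for a likelihood landscape
    return np.sum(x**2, axis=-1) + 3.0 * np.sin(3.0 * x).sum(axis=-1)

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5    # swarm size and PSO constants
pos = rng.uniform(-5, 5, (n, dim))           # particle positions
vel = np.zeros((n, dim))                     # particle velocities
pbest, pbest_f = pos.copy(), neg_log_like(pos)
gbest = pbest[pbest_f.argmin()].copy()       # swarm-wide best position

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    # Velocity update: inertia + pull toward personal and global bests
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = neg_log_like(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best-fit parameters:", gbest)
```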

  7. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems, through parallel, distributed, and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  8. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has elaborated spatial descriptions and hydrological behaviors. Meanwhile, this trend is accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets of large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
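    A skeleton of the GLUE procedure with a toy one-parameter model is sketched below; the study's contribution is to replace the uniform prior sampler in step 1 with heuristic optimizers (genetic algorithm, differential evolution, shuffled complex evolution) that concentrate samples in high-likelihood regions. The model, likelihood measure, and behavioural threshold here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
obs = 2.0 + rng.normal(0, 0.3, 50)            # synthetic observations

def simulate(theta):
    return np.full(50, theta)                  # toy stand-in for a model

# 1. Sample prior parameter sets (GLUE's stochastic step; the paper
#    substitutes heuristic samplers here to improve efficiency).
thetas = rng.uniform(0, 5, 5000)

# 2. Informal likelihood: inverse error sum of squares, thresholded.
sse = np.array([np.sum((simulate(t) - obs)**2) for t in thetas])
L = 1.0 / sse
behavioural = L > np.quantile(L, 0.90)         # retain the best 10%

# 3. Likelihood-weighted posterior summary and uncertainty bounds.
w = L[behavioural] / L[behavioural].sum()
order = np.argsort(thetas[behavioural])
cdf = np.cumsum(w[order])
th = thetas[behavioural][order]
print("5-95% uncertainty band:",
      th[np.searchsorted(cdf, 0.05)], th[np.searchsorted(cdf, 0.95)])
```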

  9. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified with the Indianapolis field study, which used an SF6 release source. The results demonstrate the effectiveness of the method.

  10. Implicit methods for efficient musculoskeletal simulation and optimal control

    PubMed Central

    van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter

    2011-01-01

    The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983

  11. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642

  12. Particle swarm optimization and gravitational wave data analysis: Performance on a binary inspiral testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Mohanty, Soumya D.

    2010-03-15

    The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.

  13. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  14. Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.

    PubMed

    Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad

    2013-09-01

    We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest-town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to heavily populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab, and the estimated optimal locations are analyzed along with their respective 95% credible regions.

  15. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Artifacts in digital coincidence timing

    DOE PAGES

    Moses, W. W.; Peng, Q.

    2014-10-16

    Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time-to-digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e. the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the 'optimal' method. In conclusion, the purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization.

  17. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to tune the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set, for both Shrunken Centroids and SVM classifiers and for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
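    As a concrete illustration, here is a minimal nested CV sketch using scikit-learn (an assumed implementation, with a linear SVM and a grid over its C parameter): the inner loop tunes the parameter, and the outer loop, wrapping the entire tuning procedure, supplies the nearly unbiased error estimate the authors recommend.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 500))               # "null" data: no real signal
y = rng.integers(0, 2, 60)

inner = GridSearchCV(SVC(kernel='linear'),   # inner CV tunes the parameter
                     param_grid={'C': [0.01, 0.1, 1, 10]}, cv=5)

# Biased estimate: the same CV splits both tune and evaluate
inner.fit(X, y)
print("optimistic CV accuracy:", inner.best_score_)

# Nearly unbiased estimate: outer CV wraps the whole tuning procedure
outer_scores = cross_val_score(inner, X, y, cv=5)
print("nested CV accuracy:", outer_scores.mean())   # ~0.5 for null data
```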

  18. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform between adjacent frames according to the warping-based motion representation model. It works well on challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is used to weight the motion of adjacent grid cells. The paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which exhibit casual jitter and parallax, and achieve good results.

  19. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  1. Forecasting peaks of seasonal influenza epidemics.

    PubMed

    Nsoesie, Elaine; Mararthe, Madhav; Brownstein, John

    2013-06-21

    We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.

  2. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    NASA Astrophysics Data System (ADS)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods, arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the first-order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering (DiM) techniques. We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells, and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  3. Approximate approach for optimization space flights with a low thrust on the basis of sufficient optimality conditions

    NASA Astrophysics Data System (ADS)

    Salmin, Vadim V.

    2017-01-01

    Low-thrust flight mechanics is a new chapter of space flight mechanics, encompassing the full range of problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks associated with accounting for additional factors in mathematical models of spacecraft motion are becoming increasingly important, as are additional restrictions on thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality on the basis of analytical solutions. These methods rely on the principle of extending the class of admissible states and controls, and on sufficient conditions for an absolute minimum. Estimation procedures are developed that determine how close a found solution is to the optimum and indicate ways to improve it. The author describes these estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between non-coplanar circular orbits, optimizing the control angle and trajectory of a spacecraft during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth satellite orbits.

  4. Experimental Design for Parameter Estimation of Gene Regulatory Networks

    PubMed Central

    Timmer, Jens

    2012-01-01

    Systems biology aims at building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded best-performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of the most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723
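    The profile likelihood idea can be shown in miniature: fix the parameter of interest on a grid and re-optimize all remaining parameters at each grid point. The two-parameter exponential-decay model below is a hypothetical stand-in for a GRN model, not one of the DREAM6 networks.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t = np.linspace(0, 4, 25)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.1, t.size)  # synthetic data

def nll(a, k):
    # Least squares, proportional to a Gaussian negative log-likelihood
    return np.sum((y - a * np.exp(-k * t))**2)

# Profile the decay rate k: for each fixed k, re-optimize the amplitude a
profile = []
for k in np.linspace(0.3, 1.2, 40):
    res = minimize(lambda p: nll(p[0], k), x0=[1.0])
    profile.append((k, res.fun))

k_grid, pl = map(np.array, zip(*profile))
print("profile-likelihood optimum k:", k_grid[pl.argmin()])
# The curvature of pl around its minimum indicates how identifiable k is.
```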

  5. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using Powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method and the slit method. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is sensitive to the initial parameter values. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with little sensitivity to their initial values. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and tilt angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard ISO 12233 knife-edge method in MTF estimation.

  6. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.

  7. Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2010-01-01

    This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi

    ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single-visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single-visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.

  9. A concept for adaptive performance optimization on commercial transport aircraft

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R.; Enns, Dale F.

    1995-01-01

    An adaptive control method is presented for the minimization of drag during flight for transport aircraft. The minimization of drag is achieved by taking advantage of the redundant control capability available in the pitch axis, with the horizontal tail used as the primary surface and symmetric deflection of the ailerons and cruise flaps used as additional controls. The additional control surfaces are excited with sinusoidal signals, while the altitude and velocity loops are closed with guidance and control laws. A model of the throttle response as a function of the additional control surfaces is formulated and the parameters in the model are estimated from the sensor measurements using a least squares estimation method. The estimated model is used to determine the minimum drag positions of the control surfaces. The method is presented for the optimization of one and two additional control surfaces. The adaptive control method is extended to optimize rate of climb with the throttle fixed. Simulations that include realistic disturbances are presented, as well as the results of a Monte Carlo simulation analysis that shows the effects of changing the disturbance environment and the excitation signal parameters.

  10. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. The parameters are estimated using particle swarm optimization (PSO), which searches for the optimal solution of a multivariate cost function. The camera intrinsic parameter estimation procedure has three parts: extraction and fine matching of interest points in the images, establishment of the cost function based on the Kruppa equations, and PSO optimization using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function, involving four intrinsic parameters, was minimized by PSO to find the optimal solution. To prevent the optimization from converging to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrate that the proposed method is highly accurate and robust.

  11. Robust learning for optimal treatment decision with NP-dimensionality

    PubMed Central

    Shi, Chengchun; Song, Rui; Lu, Wenbin

    2016-01-01

    In order to identify important variables involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high-dimensional data sets, for example, where the dimension of the predictors is of non-polynomial (NP) order in the sample size; (ii) both the propensity score and the conditional mean model need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given the predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717

  12. Stationary variational estimates for the effective response and field fluctuations in nonlinear composites

    NASA Astrophysics Data System (ADS)

    Ponte Castañeda, Pedro

    2016-11-01

    This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.

  13. Optimization of advanced Wiener estimation methods for Raman reconstruction from narrow-band measurements in the presence of fluorescence background

    PubMed Central

    Chen, Shuo; Ong, Yi Hong; Lin, Xiaoqian; Liu, Quan

    2015-01-01

    Raman spectroscopy has shown great potential in biomedical applications. However, intrinsically weak Raman signals cause slow data acquisition, especially in Raman imaging. This problem can be overcome by narrow-band Raman imaging followed by spectral reconstruction. Our previous study has shown that Raman spectra free of fluorescence background can be reconstructed from narrow-band Raman measurements using traditional Wiener estimation. However, fluorescence-free Raman spectra are only available from sophisticated Raman setups capable of fluorescence suppression. The reconstruction of Raman spectra with fluorescence background from narrow-band measurements is much more challenging due to the significant variation in fluorescence background. In this study, two advanced Wiener estimation methods, i.e., modified Wiener estimation and sequential weighted Wiener estimation, were optimized to achieve this goal. Both spontaneous Raman spectra and surface enhanced Raman spectra were evaluated. Compared with traditional Wiener estimation, the two advanced methods showed significant improvement in the reconstruction of spontaneous Raman spectra. However, traditional Wiener estimation can work as effectively as the advanced methods for SERS spectra, but is much faster. A wise selection among these methods would enable accurate Raman reconstruction for fast Raman imaging in a simple Raman setup without fluorescence suppression capability. PMID:26203387
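    For reference, basic linear Wiener estimation, which the two advanced methods modify, can be sketched as follows; the spectral library, narrow-band response matrix A, and noise covariance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_bands, n_channels, n_train = 100, 5, 40

# Hypothetical training library of full spectra and a narrow-band
# response matrix A (channels x spectral bands).
S = np.abs(rng.normal(1.0, 0.3, (n_train, n_bands)))
A = np.abs(rng.normal(0.5, 0.1, (n_channels, n_bands)))

# Wiener matrix W = Cs A^T (A Cs A^T + Cn)^(-1) from training statistics
Cs = np.cov(S, rowvar=False)                 # signal covariance
Cn = 1e-3 * np.eye(n_channels)               # assumed noise covariance
W = Cs @ A.T @ np.linalg.inv(A @ Cs @ A.T + Cn)

# Reconstruct a new spectrum from its narrow-band measurement
s_true = np.abs(rng.normal(1.0, 0.3, n_bands))
m = A @ s_true + rng.normal(0, 0.03, n_channels)
s_mean = S.mean(axis=0)
s_hat = s_mean + W @ (m - A @ s_mean)        # Wiener-estimated spectrum
print("reconstruction RMSE:", np.sqrt(np.mean((s_hat - s_true)**2)))
```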

  14. Reader reaction to "a robust method for estimating optimal treatment regimes" by Zhang et al. (2012).

    PubMed

    Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C

    2015-03-01

    A recent article (Zhang et al., 2012, Biometrics 168, 1010-1018) compares regression based and inverse probability based methods of estimating an optimal treatment regime and shows for a small number of covariates that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012, Biometrics 168, 1010-1018), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression and inverse probability weighted based methods improves their properties. © 2014, The International Biometric Society.

  15. Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.

    PubMed

    Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew

    2017-08-10

    When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters like the window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes, and the method needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixel from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
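
    The thresholded center-of-gravity computation that such estimators tune is itself simple; a minimal numpy version is sketched below. The thresholding shown is one common variant, an assumption here, and not the stream-processing design proposed in the paper.

    ```python
    import numpy as np

    def cog_centroid(spot, threshold=0.0):
        """Center-of-gravity centroid of a 2-D spot image.

        Pixels at or below `threshold` are zeroed to limit background
        bias, mirroring the parameter tuning discussed above.
        """
        img = np.where(spot > threshold, spot - threshold, 0.0)
        total = img.sum()
        if total == 0:
            return np.nan, np.nan
        y, x = np.indices(img.shape)
        return (x * img).sum() / total, (y * img).sum() / total
    ```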

  16. Reliability optimization design of the gear modification coefficient based on the meshing stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Qianqian; Wang, Hui

    2018-04-01

    Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering the random parameters, the reliability optimization design of the gear modification coefficient is studied. The dimension reduction and point estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. Comparison of the dynamic amplitude results before and after optimization indicates that the research is useful for the reduction of vibration and noise and the improvement of reliability.

  17. Optimal multi-dimensional poverty lines: The state of poverty in Iraq

    NASA Astrophysics Data System (ADS)

    Ameen, Jamal R. M.

    2017-09-01

    Poverty estimation based on calorie intake is unrealistic. The established concept of multidimensional poverty has methodological weaknesses in the treatment of different dimensions, and there is disagreement over methods of combining them into a single poverty line. This paper introduces a methodology to estimate optimal multidimensional poverty lines and uses the Iraqi household socio-economic survey data of 2012 to demonstrate the idea. The optimal poverty line for Iraq is found to be 170.5 Thousand Iraqi Dinars (TID).

  18. A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

    Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
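
    For orientation, the sketch below implements the classic correlation-based clutterlock estimator (not the QHS-optimized linear estimator described in the report): the Doppler centroid is read off the phase of the lag-one autocorrelation of the azimuth signal.

    ```python
    import numpy as np

    def doppler_centroid(az_signal, prf):
        """Correlation Doppler centroid estimate from complex azimuth samples.

        az_signal : 1-D complex array, azimuth samples at a fixed range gate
        prf       : pulse repetition frequency in Hz
        """
        # The mean phase increment per pulse maps directly to the centroid.
        acf1 = np.sum(az_signal[1:] * np.conj(az_signal[:-1]))
        return prf * np.angle(acf1) / (2.0 * np.pi)
    ```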

  19. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
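
    The competing criteria are all functionals of the Fisher information matrix; a minimal sketch of their evaluation for a candidate design is given below. The sensitivity-matrix input and the unnormalized SE criterion are simplifying assumptions (Banks et al. normalize each standard error by its parameter value).

    ```python
    import numpy as np

    def design_criteria(S, sigma=1.0):
        """Evaluate D-, E-, and SE-optimal criteria for a candidate design.

        S : (n_times, n_params) sensitivity matrix of the model output
            w.r.t. the parameters at the candidate sampling times.
        """
        F = S.T @ S / sigma**2                      # Fisher information matrix
        cov = np.linalg.inv(F)                      # asymptotic covariance
        d_crit = np.linalg.det(cov)                 # D-optimal: minimize det(F^-1)
        e_crit = 1.0 / np.linalg.eigvalsh(F).min()  # E-optimal: largest eigenvalue of F^-1
        se_crit = np.trace(cov)                     # SE-optimal: sum of squared SEs
        return d_crit, e_crit, se_crit
    ```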

  20. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. Suboptimal estimates (optimal estimates computed with only approximate knowledge of the signal and measurement-error statistics) are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.

  1. Parameter estimation of a pulp digester model with derivative-free optimization strategies

    NASA Astrophysics Data System (ADS)

    Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.

    2017-07-01

    The work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster but at a significantly higher value of the objective function and thus failed to find the global solution.
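
    A box-bounded global fit of this kind can be prototyped with SciPy's dual_annealing solver (a generalized simulated annealing, standing in here for the solvers named above); the toy digester surrogate and data below are purely illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    y_obs = np.array([3.2, 2.9, 2.5, 2.2])   # stand-in plant measurements

    def simulate(params, t=np.arange(4.0)):
        k, c = params
        return c * np.exp(-k * t)             # toy surrogate, not the digester model

    def objective(params):
        return np.sum((simulate(params) - y_obs) ** 2)

    # Box bounds on the parameters, as in the paper's problem formulation.
    res = dual_annealing(objective, bounds=[(0.01, 2.0), (0.1, 10.0)], seed=1)
    print(res.x, res.fun)
    ```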

  2. Variational estimate method for solving autonomous ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2018-04-01

    In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate is an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute; this is a great advantage of the variational estimate formulation.

  3. Control system estimation and design for aerospace vehicles with time delay

    NASA Technical Reports Server (NTRS)

    Allgaier, G. R.; Williams, T. L.

    1972-01-01

    The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.

  4. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
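
    The idea of an l1-optimized camera path can be written as a small convex program: a total-variation penalty on path differences (favoring steady, piecewise-constant motion) plus a proximity term that keeps the smoothed path near the raw one to limit holes. The cvxpy sketch below is a one-dimensional illustration under these assumptions, not the paper's full homography pipeline.

    ```python
    import cvxpy as cp
    import numpy as np

    # Hypothetical raw 1-D camera-path parameter over time (e.g., x-translation)
    # accumulated from frame-to-frame motion estimates.
    c = np.cumsum(np.random.randn(200))

    p = cp.Variable(200)        # smoothed camera path
    lam = 5.0                   # trades smoothness against cropping/holes
    objective = cp.Minimize(lam * cp.norm1(cp.diff(p)) + cp.sum_squares(p - c))
    cp.Problem(objective).solve()
    smoothed = p.value
    ```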

  5. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and the estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of an internal reference line and an optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from the analytical lines by a programmable procedure using easily accessible parameters, and the self-absorption effect of the internal reference line is accounted for during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature, starting from the calculation results of the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of the calculated Cr, Ni, and Fe concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement over the classical CF-LIBS method and promising potential for in situ and real-time application.
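
    The Boltzmann-plot step that the PSO refinement starts from reduces to a linear fit: ln(I·λ/(g·A)) plotted against the upper-level energy has slope -1/(k_B·T). The sketch below assumes per-line arrays of intensities, wavelengths, statistical weights, transition probabilities and upper-level energies.

    ```python
    import numpy as np

    def boltzmann_plot_temperature(intensity, wavelength, g, A, E_upper_eV):
        """Plasma temperature (K) from a Boltzmann plot over analytical lines."""
        k_B = 8.617333e-5                        # Boltzmann constant, eV/K
        y = np.log(intensity * wavelength / (g * A))
        slope, _ = np.polyfit(E_upper_eV, y, 1)  # linear fit: slope = -1/(k_B T)
        return -1.0 / (k_B * slope)
    ```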

  6. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    NASA Astrophysics Data System (ADS)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

    The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget, plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show you how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
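
    For readers new to the OEM, the linear-Gaussian retrieval underlying it fits in a few lines (Rodgers-style formulas). The Jacobian, prior and covariances below are generic placeholders rather than a lidar forward model.

    ```python
    import numpy as np

    def oem_retrieval(y, K, x_a, S_a, S_e):
        """Linear optimal estimation retrieval.

        y   : measurement vector
        K   : forward-model Jacobian (n_meas x n_state)
        x_a : a priori state;  S_a : a priori covariance
        S_e : measurement-noise covariance
        """
        S_e_inv = np.linalg.inv(S_e)
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
        x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)  # retrieved state
        A = S_hat @ K.T @ S_e_inv @ K                        # averaging kernel
        return x_hat, S_hat, A
    ```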

  7. Estimating contrast transfer function and associated parameters by constrained non-linear optimization.

    PubMed

    Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W

    2009-03-01

    The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.

  8. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of the initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, and then estimate the accelerometer bias separately, as it is difficult to distinguish under small rotations. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751

  9. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. Its efficacy depends on the embedding parameters, i.e. the embedding dimension, time lag, and number of nearest neighbors, so optimal estimation of these parameters is critical to the application of the local model. Conventionally, however, these embedding parameters are estimated separately using Average Mutual Information (AMI) and False Nearest Neighbors (FNN). This may lead to locally optimal parameter choices and thus limit prediction accuracy. To address this limitation, this paper applies a local model combined with simulated annealing (SA) to find globally optimal embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
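
    A minimal sketch of the local model itself, time-delay embedding plus nearest-neighbour prediction, is given below; SA or GA would simply wrap this as the objective evaluated over candidate (dimension, lag, neighbour count) triples. The function names and the simple neighbour-averaging predictor are illustrative assumptions.

    ```python
    import numpy as np

    def embed(series, dim, lag):
        """Takens time-delay embedding of a 1-D numpy series."""
        n = len(series) - (dim - 1) * lag
        return np.column_stack([series[i * lag : i * lag + n] for i in range(dim)])

    def local_model_predict(series, dim, lag, k):
        """One-step-ahead prediction by averaging k nearest neighbours."""
        X = embed(series, dim, lag)
        query, history = X[-1], X[:-1]
        # Target for each historical phase-space point: the value that followed it.
        targets = series[(dim - 1) * lag + 1 :]
        dists = np.linalg.norm(history - query, axis=1)
        idx = np.argsort(dists)[:k]
        return targets[idx].mean()
    ```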

  10. Optimal reference polarization states for the calibration of general Stokes polarimeters in the presence of noise

    NASA Astrophysics Data System (ADS)

    Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui

    2018-07-01

    During the calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and pseudo-inverse estimation, the measured intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. The analytical solution for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
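
    The pseudo-inverse estimation step referred to above is compact; a hedged numpy sketch with assumed array shapes follows.

    ```python
    import numpy as np

    def calibrate_system_matrix(I_meas, S_rps):
        """Pseudo-inverse estimate of a Stokes polarimeter system matrix.

        I_meas : (n_states, n_channels) intensities recorded for each RPS
        S_rps  : (n_states, 4) Stokes vectors of the reference states
        """
        # Measurement model I = S_rps @ W.T, so W.T = pinv(S_rps) @ I_meas.
        return (np.linalg.pinv(S_rps) @ I_meas).T   # (n_channels, 4)
    ```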

  11. Toward Improved Methods of Estimating Attenuation, Phase and Group velocity of surface waves observed on Shallow Seismic Records

    NASA Astrophysics Data System (ADS)

    Diallo, M. S.; Holschneider, M.; Kulesh, M.; Scherbaum, F.; Ohrnberger, M.; Lück, E.

    2004-05-01

    This contribution is concerned with the estimation of attenuation and dispersion characteristics of surface waves observed on a shallow seismic record. The analysis is based on an initial parameterization of the phase and attenuation functions, which are then estimated by minimizing a properly defined merit function. To minimize the effect of random noise on the estimates of dispersion and attenuation, we use cross-correlations (in the Fourier domain) of preselected traces from some region of interest along the survey line. These cross-correlations are then expressed in terms of the parameterized attenuation and phase functions and the auto-correlation of the so-called source trace or reference trace. The cross-correlations that enter the optimization are selected so as to provide an average estimate of both the attenuation function and the phase (group) velocity of the area under investigation. The advantage of the method over the standard two-station method using Fourier techniques is that uncertainties related to phase unwrapping and to estimating the number of 2π cycle skips in the phase are eliminated. However, when multiple mode arrivals are observed, it becomes nearly impossible to obtain reliable estimates of the dispersion curves for the different modes using the optimization method alone. To circumvent this limitation, we use the presented approach in conjunction with the wavelet propagation operator (Kulesh et al., 2003), which allows band-pass filtering in the (ω-t) domain to select a particular mode for the minimization. Also, by expressing the cost function in the wavelet domain, the optimization can be performed with respect to the phase, the modulus of the transform, or a combination of both. This flexibility in the design of the cost function provides an additional means of constraining the optimization results. Results from the application of this dispersion and attenuation analysis method are shown for both synthetic and real 2D shallow seismic data sets. M. Kulesh, M. Holschneider, M. S. Diallo, Q. Xie and F. Scherbaum, Modeling of Wave Dispersion Using Wavelet Transform (submitted to Pure and Applied Geophysics).

  12. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    The aim is to design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving the temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations while capturing the temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal orders (RMSE = 55.7). For estimates at locations with concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than the random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using optimal removal were not statistically different from those using random removal when averaged over the entire facility. No statistical difference was observed between the optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  13. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
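
    A hedged sketch of the single-tone fit at the heart of such a discriminator, using SciPy's Levenberg-Marquardt solver on the coherent-integration outputs, is shown below; the parameterization and starting values are illustrative assumptions, not the paper's loop design.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_single_tone(z, fs, f0):
        """Fit amplitude, carrier phase and frequency of a complex tone.

        z  : complex coherent-integration outputs
        fs : rate of those outputs in Hz;  f0 : initial frequency guess
        """
        t = np.arange(len(z)) / fs

        def residuals(p):
            a, phi, f = p
            model = a * np.exp(1j * (2 * np.pi * f * t + phi))
            r = z - model
            return np.concatenate([r.real, r.imag])  # LM needs real residuals

        sol = least_squares(residuals, [np.abs(z).mean(), 0.0, f0], method="lm")
        return sol.x  # amplitude, phase, frequency
    ```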

  14. Control-enhanced multiparameter quantum estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2017-10-01

    Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable of providing a higher precision limit, but also exhibit higher stability against inaccuracy in the time point at which the measurements are performed. This high time stability will benefit practical metrology, where it is hard to perform the measurement at a very precise time point due to the response time of the measurement apparatus.

  15. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    PubMed

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.

  16. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed in such a way that the model first obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.

  17. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  18. Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision

    PubMed Central

    Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao

    2015-01-01

    In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated with a factorization method, based on characteristic points' trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize the non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated by introducing a basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, and the initial structure matrix calculated by the PTA method is optimized. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
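
    The key APG ingredient is the proximal step for the nuclear norm, i.e. singular-value soft-thresholding; a minimal numpy version follows (the surrounding APG loop with its momentum updates is omitted).

    ```python
    import numpy as np

    def svt(M, tau):
        """Singular-value thresholding: prox operator of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
        return U @ np.diag(s_shrunk) @ Vt
    ```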

  19. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    NASA Astrophysics Data System (ADS)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    Soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time-consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.

  20. Method and system for diagnostics of apparatus

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2012-01-01

    Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.

  1. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  2. New estimation architecture for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Covino, Joseph M.; Griffiths, Barry E.

    1991-07-01

    This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.

  3. Simulation-Based Joint Estimation of Body Deformation and Elasticity Parameters for Medical Image Analysis

    PubMed Central

    Foskey, Mark; Niethammer, Marc; Krajcevski, Pavel; Lin, Ming C.

    2014-01-01

    Estimation of tissue stiffness is an important means of noninvasive cancer detection. Existing elasticity reconstruction methods usually depend on a dense displacement field (inferred from ultrasound or MR images) and known external forces. Many imaging modalities, however, cannot provide details within an organ and therefore cannot provide such a displacement field. Furthermore, force exertion and measurement can be difficult for some internal organs, making boundary forces another missing parameter. We propose a general method for estimating elasticity and boundary forces automatically using an iterative optimization framework, given the desired (target) output surface. During the optimization, the input model is deformed by the simulator, and an objective function based on the distance between the deformed surface and the target surface is minimized numerically. The optimization framework does not depend on a particular simulation method and is therefore suitable for different physical models. We show a positive correlation between clinical prostate cancer stage (a clinical measure of severity) and the recovered elasticity of the organ. Since the surface correspondence is established, our method also provides a non-rigid image registration, where the quality of the deformation fields is guaranteed, as they are computed using a physics-based simulation. PMID:22893381

  4. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  5. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method which makes the measurement signals sensitive to wavelength and decreases the ill-conditioning of the coefficient matrix of the linear system, effectively enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the experimentally measured ASD over Harbin, China is recovered reasonably. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
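
    SciPy exposes an LSQR solver with a damping term, which is one way to sketch this kind of regularized inversion of an ill-conditioned linear system; the kernel and signal below are random stand-ins, not the ADA forward model.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    A = rng.random((12, 40))     # stand-in kernel: wavelengths x size bins
    x_true = np.exp(-0.5 * ((np.linspace(0.1, 4, 40) - 1.2) / 0.3) ** 2)
    b = A @ x_true               # noiseless stand-in extinction signals

    # `damp` adds Tikhonov-style regularization to stabilize the inversion.
    x_hat = lsqr(A, b, damp=1e-3)[0]
    asd_estimate = np.clip(x_hat, 0.0, None)   # enforce non-negativity
    ```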

  6. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  7. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  8. Conditioning of Model Identification Task in Immune Inspired Optimizer SILO

    NASA Astrophysics Data System (ADS)

    Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.

    2009-10-01

    Methods which provide good conditioning of the model identification task in the immune-inspired, steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to assure that the gains of the process model can be estimated. The second method is responsible for the elimination of potential linear dependences between columns of the observation matrix. Moreover, new results from a SILO implementation in a Polish power plant are presented. They confirm the high efficiency of the presented solution in solving technical problems.

  9. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
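
    A simplified alternating scheme conveys the two ingredients, off-diagonal soft-thresholding for sparsity and an eigenvalue floor for positive definiteness; this is a hedged stand-in for the paper's ADMM algorithm, not a reproduction of it.

    ```python
    import numpy as np

    def sparse_psd_cov(S, thresh, eig_floor=1e-3, iters=20):
        """Alternate soft-thresholding with projection onto {eigenvalues >= floor}."""
        Sigma = S.copy()
        for _ in range(iters):
            # Soft-threshold the off-diagonal entries only.
            T = np.sign(Sigma) * np.maximum(np.abs(Sigma) - thresh, 0.0)
            np.fill_diagonal(T, np.diag(Sigma))
            # Project onto matrices with eigenvalues at least eig_floor.
            w, V = np.linalg.eigh(T)
            Sigma = (V * np.maximum(w, eig_floor)) @ V.T
        return Sigma
    ```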

  10. Optimal accelerometer placement on a robot arm for pose estimation

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.

    2017-05-01

    The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with angle joint encoders that are used to estimate the position and orientation (or the pose) of its end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every, or any joint. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot that can be used to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement and Signal-to-Noise Ratio (SNR) are included in our study of their effects for robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with increasing number of accelerometers, whereas for a SNR model that scales inversely to the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.

  11. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    PubMed

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.

  12. Generalized Likelihood Uncertainty Estimation (GLUE) methodology for optimization of extraction in natural products.

    PubMed

    Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H

    2018-06-01

    Optimization is an important aspect of natural product extraction. Herein, an alternative approach is proposed for optimization of extraction, namely, Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical-CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which may be chosen subjectively depending on the target of the investigation; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
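
    A hedged sketch of the GLUE recipe as described, Latin hypercube sampling over feasible factor ranges, Monte Carlo evaluation, and a threshold criterion on the response, is given below; the extraction factors, their ranges and the response model are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical factors: temperature (C), time (min), solvent ratio (mL/g).
    lower, upper = np.array([30.0, 5.0, 5.0]), np.array([70.0, 60.0, 30.0])

    sampler = qmc.LatinHypercube(d=3, seed=0)
    X = qmc.scale(sampler.random(n=10000), lower, upper)

    def simulated_yield(x):
        """Stand-in response model; in practice a fitted or mechanistic model."""
        t, m, r = x
        return 1.0 - (t - 55)**2 / 400 - (m - 35)**2 / 900 - (r - 18)**2 / 150

    y = np.apply_along_axis(simulated_yield, 1, X)
    behavioural = X[y >= np.quantile(y, 0.95)]   # threshold criterion on response

    # The spread of `behavioural` along each factor gives the GLUE-style range
    # (with its distribution) of near-optimal extraction conditions.
    print(behavioural.min(axis=0), behavioural.max(axis=0))
    ```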

  13. Immunity-Based Optimal Estimation Approach for a New Real Time Group Elevator Dynamic Control Application for Energy and Time Saving

    PubMed Central

    Baygin, Mehmet; Karakose, Mehmet

    2013-01-01

    Nowadays, the increasing use of group elevator control systems owing to increasing building heights makes the development of high-performance algorithms necessary in terms of time and energy saving. Although there are many studies on this topic in the literature, they are still not effective enough because they are not able to evaluate all features of the system. In this paper, a new immune-system-based optimal estimation approach is studied for the dynamic control of group elevator systems. The method is mainly based on estimating the optimal routing by optimizing all calls with genetic, immune system and DNA computing algorithms, and it is evaluated with a fuzzy system. The system is dynamic in terms of the situation of calls and the choice of the most appropriate algorithm, and it also works adaptively in terms of parameters such as the number of floors and cabins. This new approach, which provides both time and energy savings, was carried out in real time. The experimental results comparatively demonstrate the effects of the method. With the dynamic and adaptive control approach carried out in this study, significant progress on group elevator control systems has been achieved in terms of time and energy efficiency compared with traditional methods. PMID:23935433

  14. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.

  15. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  16. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  17. Estimating leaf nitrogen accumulation in maize based on canopy hyperspectrum data

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Wang, Lizhi; Song, Xiaoyu; Xu, Xingang

    2016-10-01

    Leaf nitrogen accumulation (LNA) has an important influence on the formation of crop yield and grain protein. Monitoring the leaf nitrogen accumulation of a crop canopy quantitatively and in real time is helpful for assessing crop nutrition status, diagnosing group growth and managing fertilization precisely. The study aimed to develop a universal method to monitor LNA of maize from hyperspectral data, which could provide mechanistic support for mapping LNA of maize at the county scale. The correlations between LNA and hyperspectral reflectance and its mathematical transformations were analyzed. The feature bands and their transformations were then screened to develop the optimal model for estimating LNA based on the multiple linear regression method. In-situ samples were used to evaluate the accuracy of the estimating model. Results showed that the estimating model with the first-derivative logarithmic transformation (lgP') of reflectance reached the highest correlation coefficient (0.889) with the lowest RMSE (0.646 g·m-2) and was considered the optimal model for estimating LNA in maize. The determination coefficient (R2) of the testing samples was 0.831, while the RMSE was 1.901 g·m-2. This indicated that the first-derivative logarithmic transformation of the hyperspectral data responded well to LNA of maize; based on this transformation, the optimal estimating model of LNA reached good accuracy with high stability.

  18. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors in calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for estimating calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the uncertainty component added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed the process to be optimized while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows calibrator uncertainty to be estimated for the optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the requirements on uncertainty. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
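
    The value-transfer propagation can be illustrated with a minimal Monte Carlo sketch. The 3.7% reference uncertainty comes from the abstract; the multiplicative transfer-noise model and its 0.8% magnitude are simplifying assumptions.

```python
# Monte Carlo propagation of a reference-material uncertainty through a
# value-transfer step. The 3.7% figure is from the abstract; the 0.8%
# multiplicative transfer noise is an assumed, simplified model.
import numpy as np

rng = np.random.default_rng(2)
n_sim, ref_value = 100_000, 100.0                        # arbitrary units
ref = rng.normal(ref_value, 0.037 * ref_value, n_sim)    # reference material
assigned = ref * rng.normal(1.0, 0.008, n_sim)           # transfer step

added = np.sqrt(np.var(assigned) - np.var(ref)) / ref_value
print(f"combined CV: {np.std(assigned) / ref_value:.3%}, "
      f"added by transfer: {added:.3%}")
```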

  19. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

    The corresponding FIM can be estimated by $\hat{F}(\tau) = \hat{F}(\tau, \hat{\theta}_{\mathrm{OLS}}) = (\hat{\Sigma}_N(\hat{\theta}_{\mathrm{OLS}}))^{-1}$ (13). The asymptotic standard errors are given by $\mathrm{SE}_k(\theta_0) = \sqrt{(\Sigma_{N_0})_{kk}}$, $k = 1, \ldots, p$ (14). These standard errors are estimated in practice (when $\theta_0$ and $\sigma_0$ are not known) by $\mathrm{SE}_k(\hat{\theta}_{\mathrm{OLS}}) = \sqrt{(\hat{\Sigma}_N(\hat{\theta}_{\mathrm{OLS}}))_{kk}}$, $k = 1, \ldots, p$, and for the bootstrap by $\mathrm{SE}_k(\hat{\theta}_{\mathrm{boot}}) = \sqrt{\mathrm{Cov}(\hat{\theta}_{\mathrm{boot}})_{kk}}$. We will compare the optimal design methods using the standard errors resulting from the optimal time points each ...
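
    Numerically, the relations above amount to inverting an estimated FIM and reading standard errors off the diagonal of the resulting covariance matrix, as in this sketch (the FIM entries are invented):

```python
# Sketch: asymptotic standard errors read off the diagonal of the inverse of
# an estimated Fisher information matrix, per the relations above.
import numpy as np

fim = np.array([[200.0, 30.0],
                [30.0, 80.0]])   # estimated FIM, F_hat (invented values)
cov = np.linalg.inv(fim)         # Sigma_hat = F_hat^{-1}
se = np.sqrt(np.diag(cov))       # SE_k = sqrt(Sigma_kk)
print("standard errors:", se)
```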

  20. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266

  1. Evaluation and design of a rain gauge network using a statistical optimization method in a severe hydro-geological hazard prone area

    NASA Astrophysics Data System (ADS)

    Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo

    2017-06-01

    Rainfall data gathered continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, though the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for evaluating an optimal rain gauge network aimed at robust hydro-geological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes the estimation accuracy and minimizes the total metering cost, using the variance reduction algorithm along with a time-invariant climatological variogram. This problem has been solved using an enumerative search algorithm, computing the exact Pareto front in an efficient computational time.
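
    The enumerative two-objective search can be sketched as follows; the surrogate variance model and per-gauge costs are invented stand-ins for the study's kriging-variance and metering-cost calculations.

```python
# Toy enumerative search over gauge subsets with two objectives to minimize
# (estimation variance, metering cost), keeping the exact Pareto front.
# The surrogate variance model and costs are invented stand-ins.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_gauges = 8
cost = rng.uniform(1.0, 3.0, n_gauges)   # per-gauge metering cost
info = rng.uniform(0.5, 2.0, n_gauges)   # per-gauge information content

candidates = []
for k in range(1, n_gauges + 1):
    for subset in itertools.combinations(range(n_gauges), k):
        variance = 1.0 / (1.0 + info[list(subset)].sum())  # surrogate model
        candidates.append((variance, cost[list(subset)].sum(), subset))

pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] <= c[1] and o is not c
                     for o in candidates)]
print(f"{len(pareto)} Pareto-optimal networks out of {len(candidates)}")
```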

  2. Phase Helps Find Geometrically Optimal Gaits

    NASA Astrophysics Data System (ADS)

    Revzen, Shai; Hatton, Ross

    Geometric motion planning describes motions of animals and machines governed by $\dot{g} = g\,A(q)\,\dot{q}$, where the connection $A(\cdot)$ relates shape $q$ and shape velocity $\dot{q}$ to body-frame velocity $g^{-1}\dot{g} \in \mathfrak{se}(3)$. Measuring the entire connection over a multidimensional $q$ is often infeasible with current experimental methods. We show how a phase estimator can make it tractable to measure the local structure of the connection surrounding a periodic motion $q(\varphi)$ driven by a phase $\varphi \in S^1$. This approach reduces the complexity of the estimation problem by a factor of $\dim q$. The results suggest that phase estimation can be combined with geometric optimization into an iterative gait optimization algorithm usable on experimental systems, or alternatively, to allow the geometric optimality of an observed gait to be detected. ARO W911NF-14-1-0573, NSF 1462555.

  3. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    PubMed

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with a sampling intensity of 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
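
    The underlying Cavalieri-type estimate is simple enough to state in a few lines: the volume is the number of grid points hitting the tissue, times the area each point represents, times the effective slice spacing. The counts and geometry below are illustrative, not patient data.

```python
# Cavalieri-type point-counting estimate: points hitting fat are counted on
# systematically sampled slices, and
#   V = (points hit) * (area per grid point) * (slice spacing * sampling step)
# Counts and geometry below are illustrative, not patient data.
import numpy as np

points_hit = np.array([14, 18, 22, 19, 11])  # counts on sampled CT slices
area_per_point_mm2 = 15.0 ** 2               # grid spacing of 15 mm (assumed)
slice_spacing_mm = 5.0
sampling_step = 8                            # every 8th slice (intensity 1/8)

volume_mm3 = (points_hit.sum() * area_per_point_mm2
              * slice_spacing_mm * sampling_step)
print(f"estimated fat volume: {volume_mm3 / 1000.0:.1f} cm^3")
```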

  4. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    PubMed

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
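
    The two-step global-then-local pattern can be sketched on a toy multimodal objective. SciPy's differential evolution stands in here for the paper's genetic algorithm, and the objective is invented.

```python
# Global stage (differential evolution, standing in for the paper's genetic
# algorithm) seeding a conjugate-gradient refinement on a toy multimodal
# objective.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    # Many local minima; global minimum at the origin
    return np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

bounds = [(-5.0, 5.0)] * 2
global_fit = differential_evolution(objective, bounds, seed=4)
local_fit = minimize(objective, global_fit.x, method="CG")
print("global stage:", global_fit.x, "-> CG refinement:", local_fit.x)
```

    A purely local start far from the origin would typically stall in one of the cosine wells, which is the failure mode the global first step is meant to avoid.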

  5. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
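
    The pipeline described above - smooth, locate inflection points, seed Gaussians, refine with Levenberg-Marquardt - can be sketched on a synthetic two-return waveform. The amplitude threshold and seed widths below are simplifying assumptions, and the importance-ranking step is omitted.

```python
# Sketch of the decomposition pipeline on a synthetic two-return waveform:
# smooth, find inflection points from sign changes of the second derivative,
# seed Gaussian centers/widths from them, refine with Levenberg-Marquardt.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

t = np.linspace(0.0, 100.0, 500)
rng = np.random.default_rng(5)
wave = (np.exp(-(t - 35.0) ** 2 / 50.0)
        + 0.6 * np.exp(-(t - 60.0) ** 2 / 120.0)
        + rng.normal(0.0, 0.01, t.size))

smooth = gaussian_filter1d(wave, sigma=3)
d2 = np.gradient(np.gradient(smooth, t), t)
inflect = np.where(np.diff(np.sign(d2)) != 0)[0]       # inflection indices
sig = inflect[smooth[inflect] > 0.2 * smooth.max()]    # drop baseline noise

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-(x - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - m2) ** 2 / (2 * s2 ** 2)))

p0 = [smooth[sig[0]], t[sig[0]], 5.0,     # seeds from first/last inflection
      smooth[sig[-1]], t[sig[-1]], 8.0]
popt, _ = curve_fit(two_gauss, t, wave, p0=p0)         # LM refinement
print("fitted centers:", round(popt[1], 1), round(popt[4], 1))
```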

  6. Optimal multi-type sensor placement for response and excitation reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, C. D.; Xu, Y. L.

    2016-01-01

    The need to perform dynamic response reconstruction always arises as the measurement of structural response is often limited to a few locations, especially for a large civil structure. Besides, it is usually very difficult, if not impossible, to measure external excitations under the operating conditions of a structure. This study presents an algorithm for optimal placement of multi-type sensors, including strain gauges, displacement transducers and accelerometers, for the best reconstruction of responses of key structural components where no sensors are installed and, at the same time, the best estimation of external excitations acting on the structure. The algorithm is developed in the framework of the Kalman filter with unknown excitation, in which minimum-variance unbiased estimates of the generalized state of the structure and the external excitations are obtained by virtue of limited sensor measurements. The structural responses of key locations without sensors can then be reconstructed with the estimated generalized state and excitation. The asymptotic stability feature of the filter is utilized for optimal sensor placement. The number and spatial location of the multi-type sensors are determined by adding the optimal sensor which gains the maximal reduction of the estimation error of reconstructed responses. For the given mode number in response reconstruction and the given locations of external excitations, the optimal multi-sensor placement achieved by the proposed method is independent of the type and time evolution of external excitation. A simply supported overhanging steel beam under multiple types of excitation is numerically studied to demonstrate the feasibility and superiority of the proposed method, and experimental work is then carried out to verify the effectiveness of the proposed method.
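
    The incremental placement loop can be illustrated with a greedy sketch that repeatedly adds the candidate sensor giving the largest reduction in the trace of the error covariance. The random measurement rows below stand in for the lead vectors of strain, displacement and acceleration sensors; the paper's Kalman-filter-with-unknown-input machinery is omitted.

```python
# Greedy placement sketch: repeatedly add the candidate whose scalar linear
# measurement most reduces the trace of the estimation error covariance.
# Candidate rows and noise level are invented stand-ins.
import numpy as np

rng = np.random.default_rng(6)
n_state, n_candidates, n_pick = 6, 12, 4
H = rng.normal(size=(n_candidates, n_state))   # candidate measurement rows
P = np.eye(n_state)                            # prior state covariance
noise_var = 0.1

chosen = []
for _ in range(n_pick):
    best = min((i for i in range(n_candidates) if i not in chosen),
               key=lambda i: np.trace(
                   P - P @ np.outer(H[i], H[i]) @ P
                   / (H[i] @ P @ H[i] + noise_var)))
    # Kalman covariance update for the selected scalar measurement
    gain = P @ H[best] / (H[best] @ P @ H[best] + noise_var)
    P = P - np.outer(gain, H[best] @ P)
    chosen.append(best)
print("selected sensors:", chosen)
```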

  7. Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2009-10-01

    Spin contamination errors of a broken-symmetry (BS) method in the optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the geometry optimized by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and AP methods is about 10-20° in the HCH angle. In order to examine the basis-set dependency of the spin contamination error, results calculated with STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method fall into two types. Calculated energy splittings between the triplet and singlet states (ST gap) indicate that contamination by the stable triplet state stabilizes the BS singlet solution, making the ST gap smaller. The spin contamination error in the ST gap is estimated to be on the order of 10^-1 eV.

  8. Estimation of Gravity Parameters Related to Simple Geometrical Structures by Developing an Approach Based on Deconvolution and Linear Optimization Techniques

    NASA Astrophysics Data System (ADS)

    Asfahani, J.; Tlas, M.

    2015-10-01

    An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.

  9. Multi-Objective Optimization of an In situ Bioremediation Technology to Treat Perchlorate-Contaminated Groundwater

    EPA Science Inventory

    The presentation shows how a multi-objective optimization method is integrated into a transport simulator (MT3D) for estimating parameters and cost of in-situ bioremediation technology to treat perchlorate-contaminated groundwater.

  10. Bilinear modeling and nonlinear estimation

    NASA Technical Reports Server (NTRS)

    Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.

    1989-01-01

    New methods are illustrated for online nonlinear estimation, applied to the lateral deflection of an elastic beam using on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance needs to be updated in online implementation. This contrasts with the computational approach of EKF methods, which arises from local linearization about the current conditional mean. Processing refinements for nonlinear estimation based on optimal nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.

  11. Invisibility problem in acoustics, electromagnetism and heat transfer. Inverse design method

    NASA Astrophysics Data System (ADS)

    Alekseev, G.; Tokhtina, A.; Soboleva, O.

    2017-10-01

    Two approaches (direct design and inverse design methods) for solving problems of designing devices that render material bodies invisible to detection by different physical fields - electromagnetic, acoustic and static - are discussed. The second method is applied to the design of cloaking devices for the 3D stationary thermal scattering model. Based on this method, the design problems under study are reduced to respective control problems. The material parameters (radial and tangential heat conductivities) of the inhomogeneous anisotropic medium filling the thermal cloak and the density of auxiliary heat sources play the role of controls. Unique solvability of the direct thermal scattering problem in Sobolev spaces is proved and new estimates of the solutions are established. Using these results, the solvability of the control problem is proved and the optimality system is derived. Based on an analysis of the optimality system, stability estimates of optimal solutions are established and numerical algorithms for solving a particular thermal cloaking problem are proposed.

  12. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.

  13. Optimal estimation of diffusion coefficients from single-particle trajectories

    NASA Astrophysics Data System (ADS)

    Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik

    2014-02-01

    How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycosylase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
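
    The CVE itself is a two-term moment formula on the observed displacements, sketched below on a simulated trajectory with localization noise. The simulation parameters are arbitrary, and the motion-blur correction of the full estimator is omitted.

```python
# Covariance-based estimator (CVE) for pure diffusion with localization
# noise, following Vestergaard et al.:
#   D_hat = <dx_n^2> / (2*dt) + <dx_n * dx_{n+1}> / dt
# where dx_n are successive observed displacements.
import numpy as np

rng = np.random.default_rng(7)
D_true, dt, n_steps, loc_noise = 0.5, 0.05, 10_000, 0.02
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), n_steps))
x_obs = x + rng.normal(0.0, loc_noise, n_steps)   # localization noise

dx = np.diff(x_obs)
D_cve = np.mean(dx ** 2) / (2 * dt) + np.mean(dx[:-1] * dx[1:]) / dt
print(f"true D = {D_true}, CVE estimate = {D_cve:.3f}")
```

    The second term cancels the noise inflation of the first: localization noise adds +2σ²/(2Δt) to the first term and exactly −σ²/Δt of anticorrelation to the second, which is why the estimator is unbiased without any regression.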

  14. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  15. Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Swinburne, Thomas D.; Perez, Danny

    2018-05-01

    A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
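
    The flavor of the Bayesian residence-time estimate can be conveyed with a conjugate sketch: observed escape events with a Gamma prior on the escape rate give a Gamma posterior, whose samples yield a credible interval on the residence time. The counts, times and prior are illustrative, not the paper's estimators.

```python
# Conjugate Bayesian sketch: escape events observed over a trajectory give a
# Gamma posterior on the escape rate, hence uncertainty on residence time.
# All numbers and the prior are invented for illustration.
import numpy as np

rng = np.random.default_rng(11)
n_escapes, total_time = 25, 5.0e-9        # events seen in 5 ns of MD (assumed)
a, b = 1.0, 1.0e-9                        # weak Gamma(a, b) prior on the rate

rate_samples = rng.gamma(a + n_escapes, 1.0 / (b + total_time), 100_000)
residence = 1.0 / rate_samples
print("mean residence time:", residence.mean(),
      "95% CI:", np.percentile(residence, [2.5, 97.5]))
```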

  16. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is modelling attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method using mixture-of-experts task estimation and attention learning in the perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space is performed for a learning task examined in the Webots simulator. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  17. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations

    PubMed Central

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-01-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects were collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. With sensors placed at the optimal locations, CMC-joint angle measurement accuracy improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364

  18. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, which both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 of data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate-size to colony-size ratios between 6.25 and 200. Published by Elsevier B.V.
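
    A simplified sketch of the plate-selection idea follows: expected counts drop by the dilution factor at each step, and the usable plate is the one whose expected count is closest to a countable optimum set by the plate and colony sizes. The stock concentration and the crowding rule below are assumptions, not the paper's exact criterion.

```python
# Pick the dilution plate whose expected colony count is closest to a
# countable optimum. The stock concentration, plated volume, and the
# crowding rule are invented stand-ins for the paper's criterion.
import numpy as np

c0 = 2.0e8            # stock concentration, CFU/mL (assumed)
volume_plated = 0.1   # mL plated per dish
dilution_factor = 10
n_plates = 10

expected = c0 * volume_plated / dilution_factor ** np.arange(1, n_plates + 1)

plate_diam_mm, colony_diam_mm = 90.0, 3.0
max_countable = 0.2 * (plate_diam_mm / colony_diam_mm) ** 2  # crowding limit
best = np.argmin(np.abs(expected - max_countable))
print(f"use dilution 10^-{best + 1}, expected ~{expected[best]:.0f} colonies")
```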

  19. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  1. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. The simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
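
    The separation principle that OSE builds on can be shown in one dimension: alternating detector elements see the primary attenuated by a known modulation factor while the scatter stays smooth, so neighboring sample pairs can be solved locally. This toy demodulation omits the paper's objective function and priors; all signals are synthetic.

```python
# 1-D toy of the primary-modulation separation principle. The OSE objective
# function and prior-knowledge terms are omitted.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
primary = 1.0 + 0.5 * np.sin(2 * np.pi * x) ** 2       # locally smooth
scatter = 0.4 * np.exp(-(x - 0.5) ** 2 / 0.08)         # smooth
m = np.where(np.arange(n) % 2 == 0, 1.0, 0.6)          # modulator pattern
measured = m * primary + scatter

# Pair each even sample with its odd neighbor and assume the primary is
# approximately constant over the pair: y_even = p + s, y_odd = 0.6*p + s
y_even, y_odd = measured[0::2], measured[1::2]
p_est = (y_even - y_odd) / (1.0 - 0.6)
s_est = y_even - p_est
print("scatter RMSE:", np.sqrt(np.mean((s_est - scatter[0::2]) ** 2)))
```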

  2. Artifacts in Digital Coincidence Timing

    PubMed Central

    Moses, W. W.; Peng, Q.

    2014-01-01

    Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885

  3. A Comparison of Quantum and Molecular Mechanical Methods to Estimate Strain Energy in Druglike Fragments.

    PubMed

    Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto

    2017-06-26

    Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance of accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method and MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with an MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and, we hope, will spur further research into QM and MM methods.

  4. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    PubMed

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than is typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.

  5. Real-Time Frequency Response Estimation Using Joined-Wing SensorCraft Aeroelastic Wind-Tunnel Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A; Heeg, Jennifer; Morelli, Eugene A

    2012-01-01

    A new method is presented for estimating frequency responses and their uncertainties from wind-tunnel data in real time. The method uses orthogonal phase-optimized multisine excitation inputs and a recursive Fourier transform with a least-squares estimator. The method was first demonstrated with an F-16 nonlinear flight simulation and results showed that accurate short period frequency responses were obtained within 10 seconds. The method was then applied to wind-tunnel data from a previous aeroelastic test of the Joined-Wing SensorCraft. Frequency responses describing bending strains from simultaneous control surface excitations were estimated in a time-efficient manner.
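
    The core estimate - output over input Fourier coefficients at the excitation lines - can be sketched as below. A batch FFT replaces the paper's recursive Fourier transform and least-squares estimator, and the first-order test system is invented.

```python
# Frequency responses at multisine excitation lines from the ratio of output
# to input Fourier coefficients. The test system, phases, and record length
# are assumptions; the paper's recursive formulation is not reproduced.
import numpy as np

rng = np.random.default_rng(12)
fs, T = 100.0, 20.0                        # sample rate (Hz), record (s)
t = np.arange(0.0, T, 1.0 / fs)
freqs = np.array([0.5, 1.0, 2.0, 4.0])     # multisine excitation lines (Hz)
phases = rng.uniform(0, 2 * np.pi, freqs.size)  # stand-in for optimized phases
u = np.sum([np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases)],
           axis=0)

# Toy first-order plant with a 1 Hz cutoff, simulated by Euler stepping
y = np.zeros_like(u)
for k in range(1, t.size):
    y[k] = y[k - 1] + (1.0 / fs) * 2 * np.pi * (u[k - 1] - y[k - 1])

U, Y = np.fft.rfft(u), np.fft.rfft(y)
bins = (freqs * T).astype(int)             # lines fall on exact FFT bins
frf = Y[bins] / U[bins]
print("gain:", np.abs(frf), "phase (deg):", np.degrees(np.angle(frf)))
```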

  6. Respiratory rate estimation from the built-in cameras of smartphones and tablets.

    PubMed

    Nam, Yunyoung; Lee, Jinseok; Chon, Ki H

    2014-04-01

    This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates. Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) to extract breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation among the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets with or without a flashlight.

  7. Flood frequency analysis using optimization techniques : final report.

    DOT National Transportation Integrated Search

    1992-10-01

    This study consists of three parts. In the first part, a comprehensive investigation was made to find an improved estimation method for the log-Pearson type 3 (LP3) distribution by using optimization techniques. Ninety sets of observed Louisiana floo...

  8. Locally adaptive methods for KDE-based random walk models of reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2017-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history, and maintains accuracy over time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  9. Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.

    PubMed

    Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z

    2012-07-01

    Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.

  10. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.

    PubMed

    Austin, Peter C

    2017-02-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
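
    A hedged sketch of the two stages follows: nearest-neighbor matching on an estimated propensity score, then covariate adjustment with the propensity score inside the matched sample. The simulated data and the with-replacement matching are simplifications, not the authors' implementation.

```python
# Two-stage sketch: (1) nearest-neighbor matching on the propensity score,
# (2) covariate adjustment using the score within the matched sample.
# Data are simulated; matching here is 1-to-1 with replacement.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=(n, 3))
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-(x @ [0.5, -0.3, 0.2]))))
y = x @ [1.0, 0.5, -0.5] + 2.0 * treat + rng.normal(size=n)  # true effect 2

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
treated, controls = np.where(treat == 1)[0], np.where(treat == 0)[0]

# Stage 1: nearest-neighbor match each treated subject on the score
matched_ctrl = controls[np.argmin(np.abs(ps[controls][None, :]
                                         - ps[treated][:, None]), axis=1)]

# Stage 2: regression adjustment with the score in the matched sample
idx = np.concatenate([treated, matched_ctrl])
design = np.column_stack([treat[idx], ps[idx]])
effect = LinearRegression().fit(design, y[idx]).coef_[0]
print(f"estimated treatment effect in the treated: {effect:.2f} (true 2.0)")
```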

  11. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
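
    An empirical version of the optimization is a short loop: for each candidate threshold, correlate the snapshot-to-snapshot area-average rain rate with the fractional coverage above that threshold, and keep the maximizer. The lognormal mixing parameters below are arbitrary.

```python
# Empirical threshold optimization: maximize the correlation between the
# area-average rain rate and the coverage above the threshold. The parent
# distribution varies between snapshots, mirroring the abstract's
# explanation; the parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(9)
mu = rng.normal(0.5, 0.4, size=300)        # snapshot-to-snapshot variation
snapshots = rng.lognormal(mean=mu[:, None], sigma=1.2, size=(300, 1000))

thresholds = np.linspace(0.5, 40.0, 80)    # candidate thresholds, mm/h
area_mean = snapshots.mean(axis=1)
corrs = [np.corrcoef(area_mean, (snapshots > tau).mean(axis=1))[0, 1]
         for tau in thresholds]
print(f"optimal threshold ~ {thresholds[int(np.argmax(corrs))]:.1f} mm/h")
```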

  12. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in industry through conservative current and voltage operating limits, which reduce overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges by incorporating model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods for battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This challenge is studied extensively in the literature, and some groups have begun addressing ways to improve the ability to estimate the model parameters. The first approach is to add sensors to the battery to gain more information for estimation. The other main approach, and the one used in this dissertation, is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of the nominal parameters, which creates a circular dependency: the parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid this circularity, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to linear and nonlinear equivalent-circuit battery models and illustrates the substantial improvements in Fisher identifiability of a periodic optimal signal compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge more slowly than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input-shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm requires only knowledge of the prior parameter distributions to converge to the optimal input trajectory.
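
    A minimal sketch of the core idea, assuming a generic scalar-output battery model `simulate(theta, u)` (a hypothetical interface, not the dissertation's models) and finite-difference output sensitivities: the Fisher information matrix for Gaussian measurement noise follows the standard sensitivity formula, and comparing its log-determinant for two candidate inputs mimics the input-shaping objective.

    ```python
    import numpy as np

    def fisher_information(simulate, theta, u, sigma=0.01, eps=1e-6):
        """Fisher information of parameters theta under iid Gaussian output noise.

        simulate(theta, u) -> 1D array of model outputs (hypothetical interface).
        Output sensitivities are taken by forward finite differences.
        """
        y0 = simulate(theta, u)
        S = np.zeros((y0.size, theta.size))      # sensitivity matrix dy/dtheta
        for j in range(theta.size):
            tp = theta.copy()
            tp[j] += eps * max(1.0, abs(theta[j]))
            S[:, j] = (simulate(tp, u) - y0) / (tp[j] - theta[j])
        return S.T @ S / sigma**2                # FIM for iid Gaussian noise

    # Toy first-order RC "battery" voltage model: theta = [R0, tau].
    def simulate(theta, u):
        r0, tau = theta
        v, y = 0.0, []
        for i_k in u:                            # u: applied current trajectory
            v += (-v / tau + i_k) * 0.1          # Euler step, dt = 0.1 s
            y.append(r0 * i_k + v)
        return np.asarray(y)

    theta = np.array([0.05, 30.0])
    t = np.arange(600)
    constant = np.ones(600)                      # benchmark-like input
    shaped = np.sin(2 * np.pi * t / 60.0)        # periodic candidate input
    for name, u in [("constant", constant), ("shaped", shaped)]:
        F = fisher_information(simulate, theta, u)
        print(name, "log det FIM =", np.linalg.slogdet(F)[1])
    ```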

  14. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
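
    For jointly Gaussian fields, the Bayesian conditioning step described above reduces to a standard linear (cokriging-style) update. A minimal sketch under that assumption; all covariance blocks below are illustrative stand-ins, not the paper's Fourier-derived solutions.

    ```python
    import numpy as np

    def condition_gaussian(mu_x, C_xx, C_xy, C_yy, y_obs, mu_y, noise_var=0.0):
        """Optimal (linear Bayesian) estimate of field x given measurements y.

        mu_x, mu_y : prior means; C_xx, C_yy : covariances; C_xy : cross-covariance.
        """
        K = C_xy @ np.linalg.inv(C_yy + noise_var * np.eye(len(mu_y)))  # gain
        mu_post = mu_x + K @ (y_obs - mu_y)
        C_post = C_xx - K @ C_xy.T
        return mu_post, C_post

    # Toy example: estimate recharge at 3 points from 2 head measurements.
    mu_r = np.zeros(3)                        # prior mean recharge (demeaned)
    C_rr = np.eye(3)
    C_rh = np.array([[0.6, 0.2],              # recharge-head cross-covariance
                     [0.3, 0.4],
                     [0.1, 0.5]])
    C_hh = np.array([[1.0, 0.3],
                     [0.3, 1.0]])
    h_obs = np.array([0.8, -0.2])
    r_hat, C_r = condition_gaussian(mu_r, C_rr, C_rh, C_hh, h_obs, np.zeros(2),
                                    noise_var=0.05)
    print("conditioned recharge estimate:", r_hat)
    ```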

  15. The Community Cloud retrieval for CLimate (CC4CL) - Part 2: The optimal estimation approach

    NASA Astrophysics Data System (ADS)

    McGarragh, Gregory R.; Poulsen, Caroline A.; Thomas, Gareth E.; Povey, Adam C.; Sus, Oliver; Stapelberg, Stefan; Schlundt, Cornelia; Proud, Simon; Christensen, Matthew W.; Stengel, Martin; Hollmann, Rainer; Grainger, Roy G.

    2018-06-01

    The Community Cloud retrieval for Climate (CC4CL) is a cloud property retrieval system for satellite-based multispectral imagers and is an important component of the Cloud Climate Change Initiative (Cloud_cci) project. In this paper we discuss the optimal estimation retrieval of cloud optical thickness, effective radius and cloud top pressure based on the Optimal Retrieval of Aerosol and Cloud (ORAC) algorithm. Key to this method is the forward model, which includes the clear-sky model, the liquid water and ice cloud models, the surface model including a bidirectional reflectance distribution function (BRDF), and the "fast" radiative transfer solution (which includes a multiple scattering treatment). All of these components and their assumptions and limitations will be discussed in detail. The forward model provides accuracy appropriate for our retrieval method: the errors are comparable to the instrument noise for cloud optical thicknesses greater than 10, while at optical thicknesses less than 10 modeling errors become more significant. The retrieval method is then presented, describing optimal estimation in general, the nonlinear inversion method employed, the measurement and a priori inputs, the propagation of input uncertainties and the calculation of subsidiary quantities derived from the retrieval results. An evaluation of the retrieval was performed using measurements simulated with noise levels appropriate for the MODIS instrument. Results show errors less than 10 % for cloud optical thicknesses greater than 10; for optical thicknesses less than 10, errors reach up to 20 %.
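
    Optimal estimation retrievals of this kind minimize a cost with a measurement term and an a priori term. A generic Gauss-Newton sketch of that textbook iteration follows; the two-parameter forward model `F`, its Jacobian, and all covariance matrices are toy stand-ins, not ORAC's.

    ```python
    import numpy as np

    def optimal_estimation(F, K_jac, y, Sy, xa, Sa, n_iter=10):
        """Gauss-Newton minimization of the optimal-estimation cost
        J(x) = (y-F(x))^T Sy^-1 (y-F(x)) + (x-xa)^T Sa^-1 (x-xa)."""
        Sy_i, Sa_i = np.linalg.inv(Sy), np.linalg.inv(Sa)
        x = xa.copy()
        for _ in range(n_iter):
            K = K_jac(x)                                  # Jacobian dF/dx
            S_hat = np.linalg.inv(K.T @ Sy_i @ K + Sa_i)  # posterior covariance
            x = x + S_hat @ (K.T @ Sy_i @ (y - F(x)) - Sa_i @ (x - xa))
        return x, S_hat

    # Toy two-parameter, three-channel forward model.
    F = lambda x: np.array([x[0] + x[1], x[0] * 2.0, np.exp(0.1 * x[1])])
    K_jac = lambda x: np.array([[1.0, 1.0],
                                [2.0, 0.0],
                                [0.0, 0.1 * np.exp(0.1 * x[1])]])
    x_true = np.array([3.0, 5.0])
    y = F(x_true) + 0.01 * np.random.randn(3)
    x_hat, S_hat = optimal_estimation(F, K_jac, y, 1e-4 * np.eye(3),
                                      xa=np.array([2.0, 4.0]), Sa=np.eye(2))
    print("retrieved state:", x_hat, "1-sigma:", np.sqrt(np.diag(S_hat)))
    ```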

  16. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
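
    For orientation, a sketch of the underlying objects: a recurrence matrix computed over a grid of distance thresholds, with the threshold selected by the common fixed-recurrence-rate heuristic. The heuristic is only a stand-in for the paper's model-based optimality criterion, and the embedded signal is synthetic.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def recurrence_matrix(x, eps):
        """R[i, j] = 1 where trajectory points i and j are closer than eps."""
        D = squareform(pdist(x.reshape(len(x), -1)))
        return (D < eps).astype(int)

    def recurrence_rate(R):
        n = len(R)
        return (R.sum() - n) / (n * (n - 1))   # exclude the diagonal

    # Noisy oscillatory time series, delay-embedded in 2D.
    t = np.linspace(0, 20 * np.pi, 800)
    s = np.sin(t) + 0.1 * np.random.randn(t.size)
    emb = np.column_stack([s[:-10], s[10:]])

    # Heuristic: choose eps giving a target recurrence rate (e.g. 5%),
    # standing in for the paper's Markov-model-based criterion.
    target = 0.05
    eps_grid = np.linspace(0.05, 1.0, 40)
    rates = [recurrence_rate(recurrence_matrix(emb, e)) for e in eps_grid]
    best = eps_grid[np.argmin(np.abs(np.array(rates) - target))]
    print("selected threshold:", round(best, 3))
    ```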

  17. Risk-Based Sampling: I Don't Want to Weight in Vain.

    PubMed

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
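
    The estimation-error point lends itself to a quick simulation: mean-variance weights computed from a short sample versus naive equal allocation, both evaluated against the true distribution. All numbers below are illustrative, and a single draw may favor either allocation; the instability of the estimated weights is the phenomenon of interest.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mu_true = np.array([0.02, 0.03, 0.025, 0.015])       # true mean returns
    Sigma_true = 0.01 * (0.5 * np.eye(4) + 0.5)          # true covariance

    def mv_weights(mu, Sigma, gamma=5.0):
        """Unconstrained mean-variance weights, normalized to sum to 1."""
        w = np.linalg.solve(gamma * Sigma, mu)
        return w / w.sum()

    # Estimate inputs from a short history, then compare true utilities.
    sample = rng.multivariate_normal(mu_true, Sigma_true, size=24)
    w_opt = mv_weights(sample.mean(axis=0), np.cov(sample.T))
    w_eq = np.full(4, 0.25)
    for name, w in [("estimated MV", w_opt), ("equal", w_eq)]:
        util = w @ mu_true - 2.5 * w @ Sigma_true @ w    # gamma/2 = 2.5
        print(name, "true utility:", round(util, 5))
    ```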

  18. New Hybrid Algorithms for Estimating Tree Stem Diameters at Breast Height Using a Two Dimensional Terrestrial Laser Scanner

    PubMed Central

    Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli

    2015-01-01

    In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic mean method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method significantly improves the accuracy of tree diameter estimation and effectively reduces the calculation time. Moreover, the experimental results indicate that the method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
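
    A compact version of the second stage, assuming scan points from one trunk cluster: nonlinear least-squares circle fitting with SciPy's Levenberg-Marquardt driver, seeded by an algebraic (Kasa) fit standing in for the paper's polar-form initialization.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_circle_lm(pts):
        """Fit (cx, cy, r) to 2D points by Levenberg-Marquardt least squares."""
        # Algebraic (Kasa) fit for the initial guess: x^2+y^2 = a*x + b*y + c.
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        a, b, c = np.linalg.lstsq(A, (pts ** 2).sum(axis=1), rcond=None)[0]
        x0 = [a / 2, b / 2, np.sqrt(c + a**2 / 4 + b**2 / 4)]

        def residuals(p):
            cx, cy, r = p
            return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

        sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
        return sol.x

    # Simulated trunk cross-section: a partial arc, as seen from one scan side.
    theta = np.linspace(-0.6 * np.pi, 0.6 * np.pi, 120)
    pts = np.column_stack([2.0 + 0.15 * np.cos(theta),
                           5.0 + 0.15 * np.sin(theta)])
    pts += 0.005 * np.random.randn(*pts.shape)          # ranging noise
    cx, cy, r = fit_circle_lm(pts)
    print(f"center=({cx:.3f}, {cy:.3f})  DBH={2 * r:.3f} m")
    ```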

  19. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. Although a globally optimal fit of the variogram cannot be obtained when simulating it over a large range, an interactive (human-computer dialogue) procedure can be used to optimize the parameters of spherical models. In this paper, this procedure and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides a best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the various theoretical models were computed, and the corresponding graphs are shown. The results show that the fit based on the two-step spherical model was the best, and that the one-step spherical model performed better than the linear function model.
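
    A minimal ordinary kriging sketch with a spherical variogram (all parameters illustrative); the extra Lagrange-multiplier row enforces the unbiasedness constraint mentioned above.

    ```python
    import numpy as np

    def spherical(h, nugget=0.1, sill=1.0, a=10.0):
        """Spherical variogram model gamma(h), with gamma(0) = 0."""
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, sill) * (h > 0)

    def ordinary_kriging(xy, z, x0):
        """BLUE of z at x0 from samples (xy, z) under sum(weights) = 1."""
        n = len(z)
        H = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[-1, -1] = 0.0
        A[:n, :n] = spherical(H)                 # gamma between samples
        b = np.ones(n + 1)
        b[:n] = spherical(np.linalg.norm(xy - x0, axis=1))
        w = np.linalg.solve(A, b)                # weights + Lagrange multiplier
        return w[:n] @ z

    # Insect counts (log-transformed) at 5 trap locations; predict at (4, 4).
    xy = np.array([[0.0, 0.0], [8.0, 1.0], [2.0, 7.0], [9.0, 8.0], [5.0, 3.0]])
    z = np.array([2.1, 1.4, 1.8, 0.9, 1.6])
    print("kriged estimate:",
          round(ordinary_kriging(xy, z, np.array([4.0, 4.0])), 3))
    ```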

  20. CMB internal delensing with general optimal estimator for higher-order correlations

    DOE PAGES

    Namikawa, Toshiya

    2017-05-24

    We present here a new method for delensing B modes of the cosmic microwave background (CMB) using a lensing potential reconstructed from the same realization of the CMB polarization (CMB internal delensing). The B-mode delensing is required to improve sensitivity to primary B modes generated by, e.g., the inflationary gravitational waves, axionlike particles, modified gravity, primordial magnetic fields, and topological defects such as cosmic strings. However, the CMB internal delensing suffers from substantial biases due to correlations between the observed CMB maps to be delensed and those used for reconstructing a lensing potential. Since the bias depends on realizations, we construct a realization-dependent (RD) estimator for correcting these biases by deriving a general optimal estimator for higher-order correlations. The RD method is less sensitive to simulation uncertainties. Compared to the previous ℓ-splitting method, we find that the RD method corrects the biases without substantial degradation of the delensing efficiency.

  1. Application of the Optimized Summed Scored Attributes Method to Sex Estimation in Asian Crania.

    PubMed

    Tallman, Sean D; Go, Matthew C

    2018-05-01

    The optimized summed scored attributes (OSSA) method was recently introduced and validated for nonmetric ancestry estimation between American Black and White individuals. The method proceeds by scoring, dichotomizing, and subsequently summing ordinal morphoscopic trait scores to maximize between-group differences. This study tests the applicability of the OSSA method for sex estimation using five cranial traits given the methodological similarities between classifying sex and ancestry. A large sample of documented crania from Japan and Thailand (n = 744 males, 320 females) are used to develop a heuristically selected OSSA sectioning point of ≤1 separating males and females. This sectioning point is validated using a holdout sample of Japanese, Thai, and Filipino (n = 178 males, 82 females) individuals. The results indicate a general correct classification rate of 82% using all five traits, and 81% when excluding the mental eminence. Designating an OSSA score of 2 as indeterminate is recommended. © 2017 American Academy of Forensic Sciences.
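
    The scoring logic is simple enough to state in a few lines. A sketch with hypothetical dichotomization cut-offs follows; the published trait-specific cut-offs from the study would replace these, and only the sectioning point of ≤1 and the indeterminate score of 2 are taken from the abstract.

    ```python
    # OSSA-style sex estimation sketch: dichotomize ordinal trait scores,
    # sum them, and classify against a sectioning point (<= 1 -> female).
    CUTOFFS = {            # hypothetical "male if score >= cutoff" thresholds
        "nuchal_crest": 3,
        "mastoid_process": 3,
        "supraorbital_margin": 3,
        "glabella": 3,
        "mental_eminence": 3,
    }

    def ossa_sex(trait_scores, sectioning_point=1, indeterminate=2):
        ossa = sum(int(trait_scores[t] >= c) for t, c in CUTOFFS.items()
                   if t in trait_scores)
        if ossa == indeterminate:
            return ossa, "indeterminate"
        return ossa, ("female" if ossa <= sectioning_point else "male")

    print(ossa_sex({"nuchal_crest": 2, "mastoid_process": 1, "glabella": 2,
                    "supraorbital_margin": 3, "mental_eminence": 1}))
    ```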

  2. Progress in navigation filter estimate fusion and its application to spacecraft rendezvous

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1994-01-01

    A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
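
    As a concrete reference point, the textbook fusion of two unbiased estimates with uncorrelated errors minimizes the trace of the fused covariance as below. This is a simplification: the algorithm described in the record additionally handles the correlation between the filters and the data-transmission constraints.

    ```python
    import numpy as np

    def fuse(x1, P1, x2, P2):
        """Covariance-weighted fusion of two independent unbiased estimates.

        Minimizes the trace of the fused covariance when the estimate errors
        are uncorrelated (a simplified special case of the algorithm above).
        """
        P1_i, P2_i = np.linalg.inv(P1), np.linalg.inv(P2)
        P_f = np.linalg.inv(P1_i + P2_i)
        x_f = P_f @ (P1_i @ x1 + P2_i @ x2)
        return x_f, P_f

    # Two filters tracking the same 2D position with different geometries.
    x1, P1 = np.array([10.2, 4.9]), np.diag([0.5, 2.0])
    x2, P2 = np.array([ 9.8, 5.3]), np.diag([2.0, 0.5])
    x_f, P_f = fuse(x1, P1, x2, P2)
    print("fused estimate:", x_f)
    print("trace before:", np.trace(P1), np.trace(P2), "after:", np.trace(P_f))
    ```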

  3. Optimal design and use of retry in fault tolerant real-time computer systems

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1983-01-01

    A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations so as to minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the fault characteristics are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions, which are standard distributions commonly used for modeling and analysis of faults.

  4. Rocket Based Combined Cycle Exchange Inlet Performance Estimation at Supersonic Speeds

    NASA Astrophysics Data System (ADS)

    Murzionak, Aliaksandr

    A method to estimate the performance of an exchange inlet for a Rocket Based Combined Cycle engine is developed. The method is intended for exchange inlet geometry optimization and as such should predict properties usable in the design process within a reasonable amount of time, so that multiple configurations can be evaluated. It is based on a curve fit of the shocks developed around the major components of the inlet, using solutions for shocks around sharp cones and 2D estimations of the shocks around wedges with blunt leading edges. The total pressure drop across the estimated shocks and the mass flow rate through the exchange inlet are calculated. The estimates for a selected range of free-stream Mach numbers between 1.1 and 7 are compared against numerical finite volume method simulations performed using available commercial software (Ansys-CFX). The total pressure difference between the two methods is within 10% for the tested Mach numbers of 5 and below, while for the Mach 7 test case the difference is 30%. The mass flow rate differs on average by less than 5% for all tested cases, with the maximum difference not exceeding 10%. The estimation method takes less than 3 seconds on a 3.0 GHz single-core processor to complete the calculations for a single flight condition, as opposed to over 5 days on an 8-core 2.4 GHz system for a 3D finite volume method simulation with a 1.5-million-element mesh. This makes the estimation method suitable for use with an exchange inlet geometry optimization algorithm.
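
    The building block of such estimates is the total pressure ratio across an oblique shock. A sketch using the standard normal-shock relations applied to the normal Mach component follows; gamma = 1.4, and the wave angles are assumed known here rather than obtained from the paper's curve fits.

    ```python
    import numpy as np

    def total_pressure_ratio(M1, beta_deg, gamma=1.4):
        """Pt2/Pt1 across an oblique shock with upstream Mach M1 and wave
        angle beta, via the normal component Mn1 = M1*sin(beta)."""
        Mn1 = M1 * np.sin(np.radians(beta_deg))
        g = gamma
        # Standard Rankine-Hugoniot total pressure ratio for a normal shock.
        term1 = ((g + 1) * Mn1**2 / ((g - 1) * Mn1**2 + 2)) ** (g / (g - 1))
        term2 = ((g + 1) / (2 * g * Mn1**2 - (g - 1))) ** (1 / (g - 1))
        return term1 * term2

    for M1, beta in [(2.0, 40.0), (5.0, 25.0), (7.0, 20.0)]:
        print(f"M1={M1}, beta={beta} deg -> Pt2/Pt1 = "
              f"{total_pressure_ratio(M1, beta):.4f}")
    ```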

  5. Bounding the moment deficit rate on crustal faults using geodetic data: Methods

    DOE PAGES

    Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael

    2017-08-19

    Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
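
    A rough sketch of the bounded-estimation idea behind COBE, using SciPy's bounded linear least squares on a toy slip-rate inversion; the Green's function matrix, the bounds and the data are all illustrative, not from the paper.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # Toy problem: surface velocities d = G @ s for 3 fault patches, with
    # geologically motivated bounds on each patch slip rate (mm/yr).
    rng = np.random.default_rng(1)
    G = rng.normal(size=(8, 3))                 # illustrative Green's functions
    s_true = np.array([5.0, 12.0, 8.0])
    d = G @ s_true + 0.5 * rng.normal(size=8)   # noisy geodetic data

    res = lsq_linear(G, d, bounds=(np.zeros(3), np.full(3, 15.0)))
    s_hat = res.x
    mdr_proxy = s_hat.sum()                     # stand-in for the moment sum
    print("bounded slip-rate estimate:", np.round(s_hat, 2),
          "MDR proxy:", round(mdr_proxy, 2))
    ```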

  7. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state transitions, and the full glide trajectory consists of several optimal trajectory sequences. The geographic constraints newly brought into focus in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.

  9. Optimization under variability and uncertainty: a case study for NOx emissions control for a gasification system.

    PubMed

    Chen, Jianjun; Frey, H Christopher

    2004-12-15

    Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.

  10. The estimation of dynamic contact angle of ultra-hydrophobic surfaces using inclined surface and impinging droplet methods

    NASA Astrophysics Data System (ADS)

    Jasikova, Darina; Kotek, Michal

    2014-03-01

    The development of industrial technology also brings demands for optimized surface quality, particularly where there is contact with food. Applying an ultra-hydrophobic surface significantly reduces the growth of bacteria and facilitates cleaning processes. Two methods are used for testing and evaluating surface quality: the impinging droplet method and the inclined surface method, both combined with high-speed shadowgraphy, which give information about the dynamic contact angle. This article presents the results of research into these new methods for measuring patented ultra-hydrophobic surface technology.

  11. Practical state of health estimation of power batteries based on Delphi method and grey relational grade analysis

    NASA Astrophysics Data System (ADS)

    Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige

    2015-05-01

    The state of health (SOH) estimation is critical for the battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and the grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were considered for SOH estimation: the peak power at 30% state-of-charge (SOC); the capacity; the voltage drop at 30% SOC with a C/3 pulse; the temperature rises at the end of discharge and at the end of charge at 1C; and the open circuit voltage at the end of charge after a 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters for these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology of estimating the SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among the different SOH estimations is discussed.
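
    A sketch of the grey relational grade computation at the heart of such schemes, with hypothetical Delphi weights; the factor list, normalization and numbers are illustrative only.

    ```python
    import numpy as np

    def grey_relational_grade(X, ref, weights, rho=0.5):
        """Weighted grey relational grade of each row of X against ref.

        X: (cells x factors) matrix normalized to [0, 1]; ref: ideal row;
        rho: distinguishing coefficient (conventionally 0.5).
        """
        delta = np.abs(X - ref)
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return xi @ weights                     # grade per cell

    # Four cells x three factors (peak power, capacity, voltage drop),
    # already normalized so that 1.0 is the fresh-cell reference.
    X = np.array([[0.95, 0.97, 0.90],
                  [0.80, 0.85, 0.75],
                  [0.70, 0.72, 0.65],
                  [0.55, 0.60, 0.50]])
    weights = np.array([0.4, 0.4, 0.2])         # hypothetical Delphi scores
    soh_proxy = grey_relational_grade(X, ref=np.ones(3), weights=weights)
    print("grey relational grades (SOH proxies):", np.round(soh_proxy, 3))
    ```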

  12. Time-Varying Delay Estimation Applied to the Surface Electromyography Signals Using the Parametric Approach

    NASA Astrophysics Data System (ADS)

    Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier

    Muscle fiber conduction velocity (MFCV) can be calculated from the time delay between surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to account for the non-stationarity during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVD), and that the data are non-stationary (their power spectral density (PSD) changes). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and a stochastic optimization technique called simulated annealing (SA); the performance of the two techniques is also compared. We also derive two appropriate Cramér-Rao lower bounds (CRLB), one for the estimated TVD model parameters and one for the TVD waveforms. Monte Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.

  13. Statistically optimal analysis of state-discretized trajectory data from multiple thermodynamic states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank

    2014-12-07

    We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.

  14. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  15. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE approach based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets, so that reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2 dimensions. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-dimension-reduction feature point information of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D. is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, achieving the final depth estimates with greatly reduced computational complexity. Compared with the traditional traversal search estimation method, the error rate of the proposed method is reduced by 0.49, and the number of searches decreases with the change of category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.

  16. Estimating of aquifer parameters from the single-well water-level measurements in response to advancing longwall mine by using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Karaman, Abdullah

    2017-04-01

    We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face involves a semi-analytical function that is not suitable for conventional inversion schemes, because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to choose an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions, consisting of either the minimum or the maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have long been used for many hydrogeologic and geophysical engineering problems, but they exhibit difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization (PSO) has become a prominent modern global optimization method, inspired by the social behaviour of bird swarms, and appears to be a reliable and powerful algorithm for complex engineering applications. PSO, which does not depend on an initial model and is a derivative-free stochastic process, appears capable of searching all possible solutions in the model space, around either local or global optimum points.
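
    A minimal PSO sketch for a generic parameter-estimation misfit; the two-parameter objective below is a stand-in for the semi-analytical water-level model, and all swarm settings are conventional defaults.

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
        """Minimize objective(x) over box bounds with a basic particle swarm."""
        rng = np.random.default_rng(0)
        lo, hi = bounds
        dim = len(lo)
        x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
        v = np.zeros_like(x)                                 # velocities
        p_best, p_val = x.copy(), np.apply_along_axis(objective, 1, x)
        g_best = p_best[p_val.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(objective, 1, x)
            better = val < p_val
            p_best[better], p_val[better] = x[better], val[better]
            g_best = p_best[p_val.argmin()].copy()
        return g_best, p_val.min()

    # Toy misfit in (transmissivity T, storativity S), log-scaled parameters.
    truth = np.array([-3.0, -4.5])                           # log10(T), log10(S)
    misfit = lambda p: np.sum((p - truth) ** 2) + 0.1 * np.sin(5 * p).sum() ** 2
    best, f = pso(misfit, (np.array([-6.0, -8.0]), np.array([0.0, 0.0])))
    print("PSO estimate (log10 T, log10 S):", np.round(best, 3),
          "misfit:", round(f, 4))
    ```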

  17. Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.

    PubMed

    Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R

    2015-11-05

    Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes the analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers two implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases; IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most computationally expensive method, and the application of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames in clinical data can be analyzed automatically, and only 15% of the frames require manual supervision. The 97% success rate achieved with IpOpt on phantom data indicates that even less supervision may become feasible. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. The root-mean-square error was monotonically decreasing and converged after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented in the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
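
    The described scheme (least-squares weights, keep only the positive-weight beamlets, repeat) can be sketched in a few lines; this is one plausible reading of the multi-level procedure, and the beamlet matrix here is random stand-in data rather than pre-calculated dose distributions.

    ```python
    import numpy as np

    def positive_lsq_fluence(B, d, n_levels=7):
        """Iteratively solve d ~ B @ w by least squares, pruning negative
        weights between levels (a sketch of the multi-level scheme above)."""
        active = np.arange(B.shape[1])
        for _ in range(n_levels):
            w_act = np.linalg.lstsq(B[:, active], d, rcond=None)[0]
            keep = w_act > 0
            if keep.all():
                break
            active = active[keep]
            if active.size == 0:
                break
        w = np.zeros(B.shape[1])
        if active.size:
            w[active] = np.clip(
                np.linalg.lstsq(B[:, active], d, rcond=None)[0], 0, None)
        return w

    rng = np.random.default_rng(0)
    B = rng.random((200, 60))                 # 200 voxels x 60 beamlets (toy)
    d = rng.random(200)                       # target dose vector (toy)
    w = positive_lsq_fluence(B, d)
    print("active beamlets:", int((w > 0).sum()),
          "rms error:", round(np.sqrt(np.mean((B @ w - d) ** 2)), 4))
    ```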

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionation. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for the tumor and the normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high-dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved by a graphical method under the constraint that the radiation effect on the tumor is fixed; here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that optimization of the fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell survival models. The graphical method, considering the repopulation of tumor cells and a rectilinear response in the high-dose range, enables one to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., the dose–volume histogram) by the graphical method considering the effects on the tumor and the OARs around the tumor. This method may provide a new guideline for optimizing the fractionation regimen for physics-guided fractionation.
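
    A worked sketch of the underlying trade-off with the plain linear-quadratic model, so without the repopulation and high-dose-linearity terms of the authors' modified models, and with illustrative alpha/beta values: fix the tumor effect, solve for the dose per fraction at each n, and evaluate the OAR effect.

    ```python
    import numpy as np

    # Plain LQ effect E = n * d * (1 + d / alpha_beta), scaled by alpha.
    AB_TUMOR, AB_OAR = 10.0, 3.0          # illustrative alpha/beta ratios (Gy)
    E_TUMOR = 72.0 * (1 + 2.0 / AB_TUMOR) # target effect: 36 x 2 Gy equivalent

    for n in [5, 10, 20, 36]:
        # Solve n*d*(1 + d/AB_TUMOR) = E_TUMOR for d (positive root).
        a, b, c = n / AB_TUMOR, float(n), -E_TUMOR
        d = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
        oar_effect = n * d * (1 + d / AB_OAR)   # same-dose OAR effect proxy
        print(f"n={n:2d}  d={d:.2f} Gy  OAR effect={oar_effect:.1f}")
    ```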

  20. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method is presented for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual statics corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist, so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power; the stack power is a measure of seismic energy transferred from energy sources to receivers.
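
    The stack-power objective is easy to illustrate: for a single trace, grid-search the static shift that maximizes the power of the stack it forms with the others, one one-dimensional subproblem of the decoupled formulation above (toy data, brute-force search rather than the patented tunneling scheme).

    ```python
    import numpy as np

    def best_static_shift(trace, others, max_shift=20):
        """Return the integer shift of `trace` maximizing total stack power."""
        def stack_power(shift):
            stack = np.roll(trace, shift) + others.sum(axis=0)
            return np.sum(stack ** 2)
        return max(range(-max_shift, max_shift + 1), key=stack_power)

    # Toy gather: a wavelet at sample 50, one trace statically shifted by +7.
    t = np.arange(200)
    wavelet = lambda c: np.exp(-0.05 * (t - c) ** 2) * np.cos(0.5 * (t - c))
    others = np.array([wavelet(50) for _ in range(5)])
    trace = wavelet(57)                      # mis-aligned trace
    print("estimated static correction:", best_static_shift(trace, others))
    ```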

  1. Airborne data measurement system errors reduction through state estimation and control optimization

    NASA Astrophysics Data System (ADS)

    Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.

    2018-02-01

    The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. The proposed approaches are based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of the criteria is explained, and the mathematical description and the sequence of steps for applying each criterion are shown. A formula is given for the posterior estimation of the airborne data measurement system state vector for systems with structure variations.

  2. Estimating the optimal dynamic antipsychotic treatment regime: Evidence from the sequential multiple assignment randomized CATIE Schizophrenia Study

    PubMed Central

    Shortreed, Susan M.; Moodie, Erica E. M.

    2012-01-01

    Treatment of schizophrenia is notoriously difficult and typically requires personalized adaptation of treatment due to lack of efficacy, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488

  3. Simulated maximum likelihood method for estimating kinetic rates in gene expression.

    PubMed

    Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin

    2007-01-01

    Kinetic rates in gene expression are a key measure of the stability of gene products and give important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression, which may arise from the discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for the optimal reaction rates. Numerical results indicate that the proposed methods give robust estimates of kinetic rates with good accuracy.

  4. SURE Estimates for a Heteroscedastic Hierarchical Model

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.

    2014-01-01

    Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
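
    A small numerical illustration of the SURE idea for the heteroscedastic shrinkage-toward-zero estimator (1 - b) * X_i: for X_i ~ N(theta_i, A_i) with known A_i, SURE(b) = sum(A_i) + b^2 * sum(X_i^2) - 2b * sum(A_i) is an unbiased estimate of the risk, and minimizing it over b picks a data-driven shrinkage. This fixed-target form is only a simple special case of the class of estimators studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    p = 500
    theta = rng.normal(0.0, 1.0, p)            # true means
    A = rng.uniform(0.2, 2.0, p)               # known heteroscedastic variances
    X = theta + rng.normal(0.0, np.sqrt(A))    # observations X_i ~ N(theta_i, A_i)

    def sure(b):
        """Unbiased risk estimate for the estimator (1 - b) * X."""
        return A.sum() + b**2 * np.sum(X**2) - 2 * b * A.sum()

    b_grid = np.linspace(0.0, 1.0, 1001)
    b_hat = b_grid[np.argmin([sure(b) for b in b_grid])]
    risk = np.mean(((1 - b_hat) * X - theta) ** 2)   # oracle check on true loss
    print(f"SURE-selected shrinkage b = {b_hat:.3f}, realized risk = {risk:.3f},"
          f" unshrunk risk = {np.mean((X - theta) ** 2):.3f}")
    ```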

  5. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method of state estimation for stochastic systems with quantised, discrete-time observation that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived from a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process and derive analytic formulae for the approximated optimal filter, also extending the results to a variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation, the state estimate is very close to the true process value. These results open up possibilities for further analysis, especially for more complex processes.

  6. On Consistency Test Method of Expert Opinion in Ecological Security Assessment

    PubMed Central

    Gong, Zaiwu; Wang, Lihong

    2017-01-01

    To support proactive design and initiative in human security management and safety warning, ecological safety assessment is of great value. In the comprehensive evaluation of regional ecological security with the participation of experts, the experts' individual judgment levels and abilities, and the consistency of the experts' overall opinion, have a very important influence on the evaluation result. This paper studies consistency measures and consensus measures based on the multiplicative and additive consistency properties of fuzzy preference relations (FPRs). We first propose optimization methods to obtain the optimal multiplicatively consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure computed as the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure computed as the distance between the original collective judgment and the optimal collective estimation. In the end, we present a case study on ecological security for five cities. The results show that the optimal FPRs are helpful in measuring the consistency degree of individual judgments and the consensus degree of collective judgments. PMID:28869570

  8. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  9. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  10. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189

  11. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  12. Estimating linear-nonlinear models using Rényi divergences

    PubMed Central

    Kouh, Minjoon; Sharpee, Tatyana O.

    2009-01-01

    This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981

  13. Estimating linear-nonlinear models using Renyi divergences.

    PubMed

    Kouh, Minjoon; Sharpee, Tatyana O

    2009-01-01

    This article compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramer-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data.

  14. A novel approach of battery pack state of health estimation using artificial intelligence optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai

    2018-02-01

    An accurate battery pack state of health (SOH) estimate is important for characterizing the dynamic responses of the battery pack and for ensuring that the battery operates safely and reliably. However, differences in discharge/charge characteristics and working conditions among the cells of a pack make battery pack SOH estimation difficult. In this paper, the battery pack SOH is defined as the change in the battery pack's maximum energy storage. It incorporates all of the cells' information, including battery capacity, the relationship between state of charge (SOC) and open circuit voltage (OCV), and cell inconsistency. To predict the battery pack SOH, a particle swarm optimization-genetic algorithm is applied to identify the battery pack model parameters. Based on these results, a particle filter is employed for battery SOC and OCV estimation to suppress the noise occurring in the battery terminal voltage measurement and current drift. Moreover, a recursive least squares method is used to update the cells' capacities. Finally, the proposed method is verified using New European Driving Cycle profiles and dynamic test profiles. The experimental results indicate that the proposed method can estimate the battery states with high accuracy in actual operation. In addition, the factors affecting the change of SOH are analyzed.
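
    One building block of this pipeline, the recursive least squares capacity update, can be sketched compactly. The snippet below assumes a coulomb-counting model (accumulated charge = capacity x change in SOC) with a forgetting factor; the PSO-GA identification and particle filter stages are not reproduced, and all numbers are synthetic.

        import numpy as np

        def rls_capacity(dsoc, amp_seconds, q0=2.0 * 3600, lam=0.99):
            # Scalar recursive least squares with forgetting factor lam:
            # model charge_k = Q * dSOC_k; Q is capacity in ampere-seconds.
            q, p = q0, 1e6
            for x, y in zip(dsoc, amp_seconds):
                k = p * x / (lam + x * p * x)    # gain
                q = q + k * (y - q * x)          # innovation update
                p = (p - k * x * p) / lam        # covariance update
            return q

        # Synthetic check: true capacity 2.2 Ah, noisy coulomb counts.
        gen = np.random.default_rng(2)
        true_q = 2.2 * 3600
        dsoc = gen.uniform(0.05, 0.2, 100)
        charge = true_q * dsoc + gen.normal(0, 5.0, 100)
        print(f"estimated capacity: {rls_capacity(dsoc, charge) / 3600:.3f} Ah")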

  15. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    PubMed

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method, which estimates height from the fringe pattern frequency, and the algorithm which estimates height from the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), optimizing the parameters to acquire the best results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically; this determines the position of the filter pass-band. The width of the filter window is then optimized in simulation to balance noise suppression against filter ringing. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiment shows that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential to improve immunity to environmental noise by designing an adaptive filter, provided the signal SNR can be estimated accurately.
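
    The frequency-domain step described above (Fourier transform, pass-band around the carrier, phase extraction) can be sketched as follows; the carrier bin and window half-width play the role of the filter parameters being optimized, and the simulated fringe signal is illustrative only.

        import numpy as np

        def fringe_phase(signal, f0, half_width):
            # FFT, keep a narrow positive-frequency band around the carrier
            # bin f0, inverse-FFT, and unwrap the angle of the result.
            n = len(signal)
            spec = np.fft.fft(signal - signal.mean())
            keep = np.zeros(n, dtype=bool)
            keep[max(f0 - half_width, 1):f0 + half_width + 1] = True
            analytic = np.fft.ifft(np.where(keep, spec, 0.0))
            return np.unwrap(np.angle(analytic))

        # Simulated fringe: carrier at 40 cycles/record plus a slow phase term.
        n = 1024
        t = np.arange(n) / n
        true_phase = 2 * np.pi * 40 * t + 1.5 * np.sin(2 * np.pi * t)
        gen = np.random.default_rng(3)
        sig = 1.0 + 0.8 * np.cos(true_phase) + gen.normal(0, 0.1, n)
        # Widening/narrowing half_width trades noise leakage against ringing.
        phase = fringe_phase(sig, f0=40, half_width=8)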

  16. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models effectively within sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175

  17. Robust and transferable quantification of NMR spectral quality using IROC analysis

    NASA Astrophysics Data System (ADS)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
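
    The core of an ROC-based quality metric of this kind can be sketched in a few lines of NumPy: given detection scores at locations where synthetic signals were injected (ground truth present) and at signal-free locations, trace out the empirical ROC curve and its area. This is a generic ROC computation, not the IROC implementation itself.

        import numpy as np

        def roc_auc(present, absent):
            # Empirical ROC from scores at injected-signal locations vs.
            # signal-free locations; returns area under the curve as well.
            thr = np.sort(np.concatenate([present, absent]))[::-1]
            tpr = np.array([(present >= t).mean() for t in thr])
            fpr = np.array([(absent >= t).mean() for t in thr])
            return np.trapz(tpr, fpr), fpr, tpr

        gen = np.random.default_rng(9)
        auc, fpr, tpr = roc_auc(gen.normal(3, 1, 50), gen.normal(0, 1, 500))
        print(f"AUC: {auc:.3f}")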

  18. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.

    Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control when compared with the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: first, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation; second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  19. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Summary Dynamic treatment regimes (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples from mental health studies, discusses two machine learning methods for estimating the optimal DTR from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
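
    One standard machine learning method for estimating an optimal DTR from SMART data is Q-learning by backward induction. The sketch below shows a two-stage version on synthetic data: fit a stage-2 outcome regression, form a pseudo-outcome by maximizing over the stage-2 treatment, then fit stage 1. The generative model and regressors are illustrative assumptions, not a method from the paper.

        import numpy as np

        gen = np.random.default_rng(8)
        n = 500
        x1 = gen.normal(size=n)                          # baseline severity
        a1 = gen.choice([-1, 1], n)                      # stage-1 treatment
        x2 = 0.5 * x1 + 0.3 * a1 + gen.normal(size=n)    # intermediate outcome
        a2 = gen.choice([-1, 1], n)                      # stage-2 treatment
        y = x2 + a2 * (0.8 * x2 - 0.2) + gen.normal(size=n)  # final outcome

        def ls(design, target):
            coef, *_ = np.linalg.lstsq(design, target, rcond=None)
            return coef

        # Stage 2: Q2(x2, a2) = b0 + b1*x2 + a2*(b2 + b3*x2).
        d2 = np.column_stack([np.ones(n), x2, a2, a2 * x2])
        beta2 = ls(d2, y)

        # Pseudo-outcome: Q2 under the best stage-2 treatment; the max over
        # a2 in {-1, +1} turns the treatment term into |b2 + b3*x2|.
        pseudo = d2[:, :2] @ beta2[:2] + np.abs(beta2[2] + beta2[3] * x2)

        # Stage 1: regress the pseudo-outcome on stage-1 history and treatment.
        d1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
        beta1 = ls(d1, pseudo)
        print("stage-2 rule: a2 = sign(%.2f + %.2f*x2)" % (beta2[2], beta2[3]))
        print("stage-1 rule: a1 = sign(%.2f + %.2f*x1)" % (beta1[2], beta1[3]))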

  20. The Event Detection and the Apparent Velocity Estimation Based on Computer Vision

    NASA Astrophysics Data System (ADS)

    Shimojo, M.

    2012-08-01

    The high spatial and temporal resolution data obtained by the telescopes aboard Hinode revealed new and interesting dynamics in the solar atmosphere. In order to detect such events and estimate the velocity of the dynamics automatically, we examined optical flow estimation methods based on OpenCV, the computer vision library. We applied the methods to a prominence eruption observed by NoRH and a polar X-ray jet observed by XRT. As a result, it is clear that the methods work well for solar images if the images are optimized for the methods. This indicates that the optical flow estimation methods in the OpenCV library are very useful for analyzing solar phenomena.
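
    As a concrete example of the OpenCV estimators referred to above, the snippet below computes dense Farnebäck optical flow between two frames; the file names are placeholders, and the parameter values are common defaults rather than the settings used in the study.

        import cv2
        import numpy as np

        # Two consecutive frames (placeholder file names), read as grayscale.
        prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

        # Dense Farneback flow: an (H, W, 2) field of per-pixel displacements.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            pyr_scale=0.5, levels=3,
                                            winsize=15, iterations=3,
                                            poly_n=5, poly_sigma=1.2, flags=0)

        # Apparent speed in pixels/frame; multiply by the plate scale and
        # cadence to convert to a physical velocity on the Sun.
        speed = np.linalg.norm(flow, axis=2)
        print("max apparent speed [px/frame]:", speed.max())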

  1. Researches of fruit quality prediction model based on near infrared spectrum

    NASA Astrophysics Data System (ADS)

    Shen, Yulin; Li, Lian

    2018-04-01

    With rising standards for food quality and safety, people pay more attention to the internal quality of fruits, so the measurement of fruit internal quality is increasingly imperative. In general, nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model for SSC and TAC based on near infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented respectively. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction models based on the theory of multi-classifier competition mechanisms; specifically, non-parametric estimation is introduced to measure the effectiveness of the proposed models, using the reliability and variance of the non-parametric estimates of each prediction model to evaluate the prediction results, with the estimated value and confidence interval as a reference. The experimental results demonstrate that this model can better achieve an optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the non-parametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.

  2. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  3. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimation of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1 %, with a kappa coefficient of 0.813. The new information extraction method greatly improves the extraction accuracy and efficiency compared with the traditional object-oriented method, and shows good promise for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, in order to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  4. Disaster debris estimation using high-resolution polarimetric stereo-SAR

    NASA Astrophysics Data System (ADS)

    Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki

    2016-10-01

    This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.

  5. Application of mathematical model methods for optimization tasks in construction materials technology

    NASA Astrophysics Data System (ADS)

    Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.

    2018-05-01

    In this paper, the regression equations method for the design of construction materials was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method focused on parameter optimization to provide an energy-saving effect at the design stage of autoclave aerated concrete, considering the replacement of traditionally used quartz sand by a coal mining by-product such as argillite. The mathematical model, represented by a quadric polynomial for the design of experiments, was obtained using calculated and experimental data. This allowed the estimation of the relationship between the composition and the final properties of the aerated concrete. The response surface, graphically presented as a nomogram, allowed the estimation of concrete properties in response to variation of the composition within the x-space. The optimal range of argillite content was obtained, leading to a reduction in raw materials demand, development of the target plastic strength of the aerated concrete, and a reduction of the curing time before autoclave treatment. Generally, this method allows the design of autoclave aerated concrete with the required performance without additional resource and time costs.
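
    The design-of-experiments step, fitting a quadric polynomial to measured responses and reading the optimum off the fitted surface, can be sketched as follows; the design points and response values are illustrative placeholders, not the paper's data.

        import numpy as np

        # Design points (e.g., argillite fraction x, water/solid ratio y) and
        # a measured response z (plastic strength); illustrative values only.
        x = np.array([0., 0., 0., .5, .5, .5, 1., 1., 1.])
        y = np.array([0., .5, 1., 0., .5, 1., 0., .5, 1.])
        z = np.array([1.0, 1.4, 1.1, 1.6, 2.1, 1.7, 1.2, 1.5, 1.0])

        # Quadric design matrix and least-squares coefficients:
        # z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2.
        A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
        b, *_ = np.linalg.lstsq(A, z, rcond=None)

        # Evaluate the fitted surface on a grid to read off the optimal
        # region, as one would from the nomogram.
        g = np.linspace(0, 1, 101)
        gx, gy = np.meshgrid(g, g)
        zz = b[0] + b[1]*gx + b[2]*gy + b[3]*gx**2 + b[4]*gx*gy + b[5]*gy**2
        i, j = np.unravel_index(zz.argmax(), zz.shape)
        print(f"optimum near x={gx[i, j]:.2f}, y={gy[i, j]:.2f}")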

  6. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  7. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  8. The Effect of Framing on Surrogate Optimism Bias: A Simulation Study

    PubMed Central

    Patel, Dev; Cohen, Elan D.; Barnato, Amber E.

    2016-01-01

    Purpose To explore the effect of emotion priming and physician communication behaviors on optimism bias. Materials and Methods We conducted a 5 × 2 between-subject randomized factorial experiment using a web-based interactive video designed to simulate a family meeting for a critically ill spouse/parent. Eligibility included age ≥ 35 and self-identifying as the surrogate for a spouse/parent. The primary outcome was the surrogate's election of code status. We defined optimism bias as the surrogate's estimate of prognosis with CPR > their recollection of the physician's estimate. Results 256/373 respondents (69%) logged in and were randomized, and 220 (86%) had non-missing data for prognosis. 67/220 (30%) overall, and 56/173 (32%) of those with an accurate recollection of the physician's estimate, had optimism bias. Optimism bias correlated with choosing CPR (p<.001). Emotion priming (p=.397), physician attention to emotion (p=.537), and framing of CPR as the social norm (p=.884) did not affect optimism bias. Framing the decision as the patient's vs. the surrogate's (25% vs. 36%, p=.066) and describing the alternative to CPR as “allow natural death” instead of “do not resuscitate” (25% vs. 37%, p=.035) decreased optimism bias. Conclusions Framing of the CPR choice during code status conversations may influence surrogates’ optimism bias. PMID:26796950

  9. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method, and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
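
    In the scalar-weighted case, the averaging problem reduces to an eigenvalue problem: the average quaternion is the eigenvector associated with the largest eigenvalue of the weighted sum of outer products of the quaternions. A minimal NumPy sketch of this form (offered as an illustration of the technique, not as the Note's full matrix-weighted algorithm):

        import numpy as np

        def average_quaternion(quats, weights=None):
            # quats: (n, 4) unit quaternions; the q ~ -q sign ambiguity is
            # handled automatically because q q^T is sign-invariant.
            q = np.asarray(quats, float)
            w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
            m = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
            vals, vecs = np.linalg.eigh(m)   # symmetric 4x4 eigen-decomposition
            return vecs[:, -1]               # eigenvector of largest eigenvalue

        # Example: noisy copies of one attitude, with deliberate sign flips;
        # the returned average is defined up to an overall sign.
        base = np.array([0.5, 0.5, 0.5, 0.5])
        gen = np.random.default_rng(4)
        qs = base + gen.normal(0, 0.01, (10, 4))
        qs /= np.linalg.norm(qs, axis=1, keepdims=True)
        qs[::2] *= -1
        print(average_quaternion(qs))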

  10. A probabilistic method for the estimation of residual risk in donated blood.

    PubMed

    Bish, Ebru K; Ragavan, Prasanna K; Bish, Douglas R; Slonim, Anthony D; Stramer, Susan L

    2014-10-01

    The residual risk (RR) of transfusion-transmitted infections, including the human immunodeficiency virus and hepatitis B and C viruses, is typically estimated by the incidence/window-period model, which relies on the following restrictive assumptions: Each screening test, with probability 1, (1) detects an infected unit outside of the test's window period; (2) fails to detect an infected unit within the window period; and (3) correctly identifies an infection-free unit. These assumptions need not hold in practice due to random or systemic errors and individual variations in the window period. We develop a probability model that accurately estimates the RR by relaxing these assumptions, and quantify their impact using a published cost-effectiveness study and also within an optimization model. These assumptions lead to inaccurate estimates in cost-effectiveness studies and to sub-optimal solutions in the optimization model. The testing solution generated by the optimization model translates into fewer expected infections without an increase in the testing cost. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Research in navigation and optimization for space trajectories

    NASA Technical Reports Server (NTRS)

    Pines, S.; Kelley, H. J.

    1979-01-01

    Topics covered include: (1) initial Cartesian coordinates for rapid precision orbit prediction; (2) accelerating convergence in optimization methods using search routines by applying curvilinear projection ideas; (3) perturbation-magnitude control for difference-quotient estimation of derivatives; and (4) determining the accelerometer bias for in-orbit shuttle trajectories.

  12. A new framework for analysing automated acoustic species-detection data: occupancy estimation and optimization of recordings post-processing

    USGS Publications Warehouse

    Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.

    2018-01-01

    The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and found that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunities and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.

  13. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  14. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  15. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.

  16. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were conducted to test the proposed method.

  17. Estimation of optimal educational cost per medical student.

    PubMed

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003~2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish the educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter or one-half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost for training to be a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.

  18. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  19. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365

  20. Spatial Variability of Organic Carbon in a Fractured Mudstone and Its Effect on the Retention and Release of Trichloroethene (TCE)

    NASA Astrophysics Data System (ADS)

    Sole-Mari, G.; Fernandez-Garcia, D.

    2016-12-01

    Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has been recently proposed to simulate reactive transport in porous media. KDE provides an optimal estimation of the area of influence of particles which is a key element to simulate nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and thereby cannot be used at each time step of the simulation; (2) it does not take advantage of the prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics, the particle history and maintains accuracy with time. The method allows particles to efficiently split and merge when necessary as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.

  1. New closed-form approximation for skin chromophore mapping.

    PubMed

    Välisuo, Petri; Kaartinen, Ilkka; Tuchin, Valery; Alander, Jarmo

    2011-04-01

    The concentrations of blood and melanin in skin can be estimated from the reflectance of light. Many models have been built for this estimation, such as Monte Carlo simulation, diffusion models, and the differential modified Beer-Lambert law. Optimization-based methods are too slow for chromophore mapping of high-resolution spectral images, and the differential modified Beer-Lambert law is often not accurate enough. Optimal coefficients for the differential Beer-Lambert model are calculated by differentiating the diffusion model, optimized for the normal skin spectrum. The derivatives are then used to predict the difference in chromophore concentrations from the difference in absorption spectra. The accuracy of the method is tested both computationally and experimentally, using a Monte Carlo multilayer simulation model and data measured from the palm of a hand during an Allen's test, which modulates the blood content of skin. The correlations of the given and predicted blood, melanin, and oxygen saturation levels are, respectively, r = 0.94, r = 0.99, and r = 0.73. Predicting the concentrations for all pixels in a 1-megapixel image would take ∼20 min, which is orders of magnitude faster than methods based on optimization during the prediction.
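
    The prediction step itself is linear: with a Jacobian of absorbance with respect to chromophore concentrations (obtained in the paper by differentiating the diffusion model around normal skin), concentration changes follow from absorbance differences by a least-squares solve per pixel. The numbers below are placeholders, not the paper's optimized coefficients.

        import numpy as np

        # Hypothetical Jacobian: rows = wavelengths, cols = (blood, melanin).
        # Illustrative values only; in the paper these derivatives come from
        # the diffusion model fitted to the normal skin spectrum.
        J = np.array([[0.9, 0.3],
                      [0.6, 0.4],
                      [0.3, 0.5],
                      [0.1, 0.6]])

        def delta_chromophores(dA):
            # Differential modified Beer-Lambert: dA ~ J @ dc, solved per pixel.
            dc, *_ = np.linalg.lstsq(J, dA, rcond=None)
            return dc

        dA = np.array([0.05, 0.04, 0.02, 0.01])  # measured absorbance change
        print(delta_chromophores(dA))            # -> [d_blood, d_melanin]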

  2. Time-optimal control of the spacecraft trajectories in the Earth-Moon system

    NASA Astrophysics Data System (ADS)

    Starinova, O. L.; Fain, M. K.; Materova, I. L.

    2017-01-01

    This paper outlines the multiparametric optimization of the L1-L2 and L2-L1 missions in the Earth-Moon system using electric propulsion. The optimal control laws are obtained using Fedorenko's successive linearization method to estimate the derivatives and the gradient method to optimize the control laws. The study of the transfers is based on the circular restricted three-body problem. The mathematical model of the missions is described in a barycentric coordinate system. The optimization criterion is the total flight time. Perturbations from the Earth, the Moon and the Sun are taken into account. The impact of the shadow regions induced by the Earth and the Moon is also accounted for. As the result of the optimization we obtain the optimal control laws, the corresponding trajectories and the minimal total flight times.

  3. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Effective and Efficient Stochastic Global Optimization

    DTIC Science & Technology

    2012-02-01

    parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model...model independent LM method based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model to measurement misfit...et al. (2011) focused on one drawback associated with LM-based model independent parameter estimation as implemented in PEST; viz., that it requires

  4. An Integrated Method Based on PSO and EDA for the Max-Cut Problem.

    PubMed

    Lin, Geng; Guan, Jian

    2016-01-01

    The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of both particle swarm optimization and the estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best-performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
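
    To make the EDA half of the hybrid concrete, the sketch below implements a PBIL-style estimation of distribution algorithm for max-cut, sampling bitstrings from a Bernoulli probability vector and nudging the vector toward the best sample. The paper's PSO-EDA additionally uses particle swarm updates, a fast local search, and path relinking, none of which are reproduced here.

        import numpy as np

        def cut_value(w, x):
            # Weight of edges crossing the partition encoded by boolean x.
            return ((x[:, None] != x[None, :]) * w).sum() / 2.0

        def eda_maxcut(w, pop=60, iters=200, lr=0.1, seed=0):
            gen = np.random.default_rng(seed)
            n = w.shape[0]
            p = np.full(n, 0.5)                 # Bernoulli probability vector
            best_x, best_v = None, -np.inf
            for _ in range(iters):
                xs = gen.random((pop, n)) < p   # sample a population
                vals = np.array([cut_value(w, x) for x in xs])
                elite = xs[vals.argmax()]
                if vals.max() > best_v:
                    best_v, best_x = vals.max(), elite.copy()
                p = (1 - lr) * p + lr * elite   # move distribution to elite
                p = p.clip(0.05, 0.95)          # keep sampling diversity
            return best_x, best_v

        # Random weighted graph.
        gen = np.random.default_rng(5)
        W = gen.random((30, 30)); W = np.triu(W, 1); W = W + W.T
        x, v = eda_maxcut(W)
        print("best cut weight:", round(v, 3))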

  5. Tire-road friction estimation and traction control strategy for motorized electric vehicle.

    PubMed

    Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motored wheels is proposed, based on the adhesion between the tire and the road surface. First, the optimal longitudinal slip ratio for torque control can be identified in real time by calculating the derivative and the slip rate of the adhesion coefficient. Second, a vehicle speed estimation method is also presented. Third, an ideal vehicle simulation model is proposed to verify the algorithm in simulation, and we find that the slip ratio corresponds to the detected adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; in the meantime, it can improve the acceleration stability of an electric vehicle with a traction control system (TCS).

  6. Tire-road friction estimation and traction control strategy for motorized electric vehicle

    PubMed Central

    Jin, Li-Qiang; Yue, Weiqiang

    2017-01-01

    In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motored wheels is proposed, based on the adhesion between the tire and the road surface. First, the optimal longitudinal slip ratio for torque control can be identified in real time by calculating the derivative and the slip rate of the adhesion coefficient. Second, a vehicle speed estimation method is also presented. Third, an ideal vehicle simulation model is proposed to verify the algorithm in simulation, and we find that the slip ratio corresponds to the detected adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; in the meantime, it can improve the acceleration stability of an electric vehicle with a traction control system (TCS). PMID:28662053

  7. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    NASA Astrophysics Data System (ADS)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision making process and are developed to deal with a broad range of issues including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into two categories, simulation and optimization models, whose calibration has been widely addressed in the literature; efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that combine capabilities of both types, such as SUFI-2, have also been developed, this paper proposes a new auto-calibration algorithm which is capable both of finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and of providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 provides a routine to find the best point and interval estimates of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimates of parameters which are non-dominated with respect to both uncertainty measures. Based on these properties of SUFI-2, two important questions arise, and answering them motivated this research: given that the final selection in SUFI-2 is based on the two measures, or objectives, while SUFI-2 contains no multi-objective optimization mechanism, are the final estimates Pareto-optimal? And can systematic methods be applied to select the final estimates? Dealing with these questions, a new auto-calibration algorithm was proposed in which the uncertainty measures are treated as two objectives, finding non-dominated interval estimations of parameters by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of the water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software in order to analyze the impacts of different water resources management strategies including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop pattern, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results revealed that the new algorithm leads to a better and continuous Pareto frontier, although it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimate on the Pareto frontier.
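
    The two uncertainty measures used as objectives can be computed directly from an ensemble of simulated series, as in the minimal sketch below (p-factor: fraction of observations inside the 95PPU band; r-factor: average band width divided by the standard deviation of the observations, a common convention). The toy data are illustrative only.

        import numpy as np

        def p_r_factors(obs, ensemble):
            # ensemble: (n_runs, n_times) simulations from sampled parameters.
            lo = np.percentile(ensemble, 2.5, axis=0)
            hi = np.percentile(ensemble, 97.5, axis=0)
            p_factor = np.mean((obs >= lo) & (obs <= hi))   # 95PPU coverage
            r_factor = np.mean(hi - lo) / obs.std()         # relative width
            return p_factor, r_factor

        gen = np.random.default_rng(6)
        obs = np.sin(np.linspace(0, 6, 120)) + gen.normal(0, 0.1, 120)
        runs = obs + gen.normal(0, 0.2, (500, 120))         # toy ensemble
        print(p_r_factors(obs, runs))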

  8. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
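
    A minimal sketch of the two-class case: compute Fisher's direction w = Sw^{-1}(m1 - m0), project the data, and threshold. The midpoint threshold used below is a simple stand-in for the optimal threshold derived in the paper, and the data are synthetic.

        import numpy as np

        def fisher_direction(x0, x1):
            # w = Sw^{-1} (m1 - m0): within-class scatter, mean difference.
            m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
            sw = np.cov(x0, rowvar=False) * (len(x0) - 1) \
               + np.cov(x1, rowvar=False) * (len(x1) - 1)
            return np.linalg.solve(sw, m1 - m0)

        gen = np.random.default_rng(7)
        x0 = gen.normal([0.0, 0.0], 1.0, (200, 2))   # class 0
        x1 = gen.normal([2.0, 1.0], 1.0, (200, 2))   # class 1
        w = fisher_direction(x0, x1)
        # Midpoint threshold on the projected data (equal priors assumed).
        thr = 0.5 * ((x0 @ w).mean() + (x1 @ w).mean())
        err = np.mean(np.concatenate([(x0 @ w) > thr, (x1 @ w) <= thr]))
        print(f"training error: {err:.3f}")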

  9. Confidence bands for measured economically optimal nitrogen rates

    USDA-ARS?s Scientific Manuscript database

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is described.

  10. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
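
    A hedged sketch of the general mechanism, adapting the inertia weight from a premature-convergence indicator, is given below on a toy objective. The indicator here is the normalized spread of particle fitness values, a common proxy and not necessarily the specific indicator designed in the paper; the Legendre-polynomial bias field fitting is not reproduced.

        import numpy as np

        def adaptive_pso(f, dim, n=30, iters=200, seed=0):
            gen = np.random.default_rng(seed)
            x = gen.uniform(-5.0, 5.0, (n, dim))
            v = np.zeros((n, dim))
            pbest = x.copy()
            pval = np.apply_along_axis(f, 1, x)
            g = pbest[pval.argmin()]
            for _ in range(iters):
                # Premature-convergence indicator: normalized fitness spread.
                spread = pval.std() / (abs(pval.mean()) + 1e-12)
                # Low spread (swarm clustering) -> raise inertia to explore.
                w = 0.9 - 0.5 * min(spread, 1.0)
                r1, r2 = gen.random((2, n, dim))
                v = np.clip(w * v + 1.5 * r1 * (pbest - x)
                            + 1.5 * r2 * (g - x), -1.0, 1.0)
                x = x + v
                val = np.apply_along_axis(f, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()]
            return g, pval.min()

        sol, best = adaptive_pso(lambda z: ((z - 1.0) ** 2).sum(), dim=4)
        print(sol.round(3), round(best, 6))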

  11. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.

  12. Hybrid Optimal Design of the Eco-Hydrological Wireless Sensor Network in the Middle Reach of the Heihe River Basin, China

    PubMed Central

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-01-01

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, and another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial prediction accuracy using 15 types of simulated fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Then, fields at multiple scales are predicted. As the scale increases, the estimated fields show greater similarity to the simulated fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variable are unknown. PMID:25317762

  13. Hybrid optimal design of the eco-hydrological wireless sensor network in the middle reach of the Heihe River Basin, China.

    PubMed

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-10-14

    The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, and another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial prediction accuracy using 15 types of simulated fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Then, fields at multiple scales are predicted. As the scale increases, the estimated fields show greater similarity to the simulated fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variable are unknown.
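
    As a hedged illustration of the geostatistical machinery underlying such a design, the sketch below computes an empirical semivariogram and fits a spherical model with SciPy; the synthetic field, lag spacing, and spherical model form are generic assumptions, not the paper's actual sub-criteria:

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_semivariogram(coords, values, lags):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs with |x_i - x_j| ~ h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for k in range(len(lags) - 1):
        mask = (d > lags[k]) & (d <= lags[k + 1])
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.asarray(gamma)

def spherical(h, nugget, sill, rng_):
    """Spherical variogram model: rises to the sill at range `rng_`."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, sill)

# Hypothetical node locations and measurements.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 1000, size=(60, 2))          # metres
values = np.sin(coords[:, 0] / 300) + 0.1 * rng.standard_normal(60)
lags = np.linspace(0, 600, 13)
gamma = empirical_semivariogram(coords, values, lags)
mid = 0.5 * (lags[:-1] + lags[1:])
ok = ~np.isnan(gamma)
(p_nugget, p_sill, p_range), _ = curve_fit(
    spherical, mid[ok], gamma[ok], p0=[0.01, gamma[ok].max(), 300.0])
print(f"nugget={p_nugget:.3f} sill={p_sill:.3f} range={p_range:.1f} m")
```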

  14. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates the registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance of the unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
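
    The selection rule described here, choosing constraint curves that minimize the conditional error variance of the remaining curves under a multivariate Gaussian model, can be sketched with a greedy search; the greedy strategy (rather than an exhaustive search over combinations) and the toy covariance matrix are assumptions made for brevity:

```python
import numpy as np

def conditional_trace(Sigma, chosen):
    """Total conditional variance of the unchosen curves given the chosen ones
    (Schur complement of the chosen block of the covariance matrix)."""
    rest = [i for i in range(Sigma.shape[0]) if i not in chosen]
    S_rr = Sigma[np.ix_(rest, rest)]
    if not chosen:
        return np.trace(S_rr)
    S_rc = Sigma[np.ix_(rest, chosen)]
    S_cc = Sigma[np.ix_(chosen, chosen)]
    return np.trace(S_rr - S_rc @ np.linalg.solve(S_cc, S_rc.T))

def greedy_subset(Sigma, n_constraints):
    """Greedily add the curve that most reduces the conditional variance."""
    chosen = []
    for _ in range(n_constraints):
        candidates = [i for i in range(Sigma.shape[0]) if i not in chosen]
        best = min(candidates, key=lambda i: conditional_trace(Sigma, chosen + [i]))
        chosen.append(best)
    return chosen

# Toy covariance over N = 8 hypothetical sulcal-error variables.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
Sigma = A @ A.T + 8 * np.eye(8)   # symmetric positive definite
print(greedy_subset(Sigma, n_constraints=3))
```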

  15. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of the uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in the model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainty are generally difficult to quantify individually. Decision support related to the optimal design of monitoring networks requires (1) detailed analyses of the existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency in detecting contaminants and providing early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on the coupling of the Particle Swarm and Levenberg-Marquardt optimization methods, which has proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to the environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies in which monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.

  16. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free model gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, a requirement hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
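
    A minimal skeleton of such a local-in-global scheme is sketched below: an outer Metropolis random walk explores the global parameters while a nested linear least-squares solve infers the shape-free weights for each proposal. The Gaussian likelihood, the linearity of the inner problem (a crude stand-in for the authors' Gauss-Newton step), and all function names are simplifying assumptions:

```python
import numpy as np

def local_in_global(predict_basis, y_obs, theta0, sigma, n_steps=5000, step=0.05, seed=0):
    """Outer Metropolis over global parameters `theta`; inner least squares
    for the shape-free weights `w` such that predict_basis(theta) @ w ~ y_obs."""
    rng = np.random.default_rng(seed)

    def inner_fit(theta):
        B = predict_basis(theta)                 # (n_obs, n_weights) design matrix
        w, *_ = np.linalg.lstsq(B, y_obs, rcond=None)
        w = np.clip(w, 0.0, None)                # travel-time weights are non-negative
        return w, B @ w

    def log_like(resid):
        return -0.5 * np.sum((resid / sigma) ** 2)

    theta = np.asarray(theta0, dtype=float)
    w, y_fit = inner_fit(theta)
    ll = log_like(y_obs - y_fit)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        w_p, y_p = inner_fit(prop)
        ll_p = log_like(y_obs - y_p)
        if np.log(rng.random()) < ll_p - ll:     # Metropolis acceptance
            theta, ll, w = prop, ll_p, w_p
        samples.append(theta.copy())
    return np.asarray(samples), w
```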

  17. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE PAGES

    Huang, Hongying; Chen, Zheng; Li, Jin; ...

    2016-08-23

    We study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples is carried out to illustrate the accuracy and capability of the schemes. Numerically, we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated, and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  18. Direct discontinuous Galerkin method and its variations for second order elliptic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hongying; Chen, Zheng; Li, Jin

    We study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the L2 norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples is carried out to illustrate the accuracy and capability of the schemes. Numerically, we obtain optimal (k+1)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated, and optimal (k+1)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.

  19. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
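
    The optimal estimator of a quantity q given input parameters phi is the conditional mean E[q | phi], and the irreducible error is the residual variance about it. A hedged one-parameter histogram version is sketched below; the binning and the synthetic data are illustrative assumptions, and it is exactly this binned approach whose spurious error contribution grows with the number of input parameters:

```python
import numpy as np

def irreducible_error_histogram(phi, q, n_bins=32):
    """Irreducible error = E[(q - E[q | phi])^2], with E[q | phi]
    approximated by the per-bin mean of q over a histogram of phi."""
    edges = np.linspace(phi.min(), phi.max(), n_bins + 1)
    idx = np.clip(np.digitize(phi, edges) - 1, 0, n_bins - 1)
    cond_mean = np.zeros(n_bins)
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            cond_mean[b] = q[in_bin].mean()
    return np.mean((q - cond_mean[idx]) ** 2)

# Synthetic check: q = f(phi) + noise, so the irreducible error
# should approach the noise variance (here 0.1**2 = 0.01).
rng = np.random.default_rng(0)
phi = rng.uniform(-1, 1, 50_000)
q = np.tanh(3 * phi) + 0.1 * rng.standard_normal(phi.size)
print(irreducible_error_histogram(phi, q))   # ~0.01 plus binning bias
```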

  20. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the motion information of the space target is the premise for realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, the dynamic parameters of the target, such as the inertia, angular momentum and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be acquired by substituting these identified parameters into Euler's equations for the target. Accurate prediction requires precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed from motion observations of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and finds a global minimum of the objective function with the guidance of the objective function's gradient. The IPM therefore converges quickly, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
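
    The two-step structure (rough estimate, then gradient-guided constrained refinement) can be sketched as follows. SciPy exposes no dedicated interior-point solver for general nonlinear problems, so the gradient-based "trust-constr" solver is used as a stand-in; the forward model, bounds, and data are all hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def predicted_motion(params, t):
    """Hypothetical forward model: motion predicted from dynamic parameters.
    Stands in for integrating Euler's equations of the target."""
    a, b = params
    return a * np.sin(b * t)

def identify(t, observed, rough_guess):
    """Step 2 of the two-step scheme: refine a rough parameter estimate by
    minimizing the observed-vs-predicted discrepancy with a gradient-based
    constrained solver (stand-in for the interior-point method)."""
    def objective(p):
        return np.sum((predicted_motion(p, t) - observed) ** 2)
    res = minimize(objective, x0=rough_guess, method="trust-constr",
                   bounds=[(0.1, 10.0), (0.1, 10.0)])
    return res.x

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
observed = predicted_motion([2.0, 1.3], t) + 0.05 * rng.standard_normal(t.size)
print(identify(t, observed, rough_guess=[1.0, 1.0]))   # ~[2.0, 1.3]
```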

  1. Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.

    PubMed

    Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane

    2017-07-01

    This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematic data only. The proposed method also identifies the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. It is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted, and a sensitivity analysis is performed, in order to robustly reproduce the human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS errors lower than 20 N and 6 N·m). The simplicity and generalization abilities of the proposed method pave the way for future diagnostic solutions and rehabilitation applications, including in-home use.
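
    Distributing a known total wrench among several contacts is an under-determined linear problem. One hedged way to mimic the constrained quadratic program described above is a weighted minimum-norm solution, sketched here for a planar toy case; the contact configuration, weights, and dimensions are illustrative assumptions, not the paper's hybrid criterion:

```python
import numpy as np

def distribute_wrench(A, w_total, weights):
    """Minimize f^T W f subject to A f = w_total (equality-constrained QP).
    Closed form: f = W^-1 A^T (A W^-1 A^T)^-1 w_total."""
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    G = A @ W_inv @ A.T
    return W_inv @ A.T @ np.linalg.solve(G, w_total)

# Toy example: total vertical force and sagittal moment shared by three
# contacts (buttocks, feet, hands), each a vertical force at offset x_i.
x_offsets = np.array([-0.2, 0.1, 0.4])          # metres, hypothetical
A = np.vstack([np.ones(3), x_offsets])          # rows: sum(F_i), sum(x_i * F_i)
w_total = np.array([600.0, 30.0])               # total force (N), moment (N*m)
weights = np.array([1.0, 1.0, 5.0])             # penalize hand loading more
f = distribute_wrench(A, w_total, weights)
print(f, A @ f)                                 # contact forces; reproduces w_total
```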

  2. Fuel-Optimal Altitude Maintenance of Low-Earth-Orbit Spacecrafts by Combined Direct/Indirect Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young

    2015-12-01

    This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecrafts experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate an optimal fuel consumption strategy, which is employed as an initial guess to precisely determine itself. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that a satellite ascends with full thrust at the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to precisely determine the switching time by using a pseudospectral method only. This is expected, since the optimal switching epoch does not coincide with one of the collocation points prescribed by the pseudospectral method, in general. As an attempt to precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed with the initial guess being obtained through the pseudospectral method. This hybrid process allows the determination of the optimal fuel consumption for LEO spacecrafts and their thrust profiles efficiently and precisely.

  3. Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data

    NASA Astrophysics Data System (ADS)

    Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results, and it can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate the MPSS, each decision making unit (DMU) should pay attention to its level of input-output efficiency. With the data envelopment analysis (DEA) method, a DMU can identify the units used as references; these references help to find the causes of and solutions to inefficiencies and to optimize productivity, which is the main advantage in managerial applications. Therefore, DEA is chosen to estimate the MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating the MPSS with integer-valued input data in the DEA method.
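
    A hedged sketch of the input-oriented CCR efficiency computation in its multiplier (linear programming) form is given below; the four-DMU dataset is hypothetical, and the integer-valued refinements and MPSS derivation of this record are not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (multiplier form):
    max u.y0  s.t.  v.x0 = 1,  u.Y_j - v.X_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.concatenate([-Y[j0], np.zeros(m)])          # linprog minimizes
    A_ub = np.hstack([Y, -X])                          # one row per DMU
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                                    # efficiency in (0, 1]

# Hypothetical integer-valued data: 4 DMUs, 2 inputs, 1 output.
X = np.array([[4, 3], [7, 3], [8, 1], [4, 2]], dtype=float)
Y = np.array([[1], [1], [1], [1]], dtype=float)
for j in range(4):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```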

  4. Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Sen, A. K.; Longman, R. W.

    2006-01-01

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with an exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct the control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
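
    A hedged sketch of the identification step, recursive least squares with an exponential forgetting factor (covariance resetting omitted for brevity), is given below; the first-order ARX-style regressor and all numeric settings are illustrative assumptions:

```python
import numpy as np

def rls_forgetting(phi_seq, y_seq, n_params, lam=0.98):
    """Recursive least squares with exponential forgetting factor `lam`,
    tracking slowly time-varying theta in y_k = phi_k . theta + noise."""
    theta = np.zeros(n_params)
    P = 1e3 * np.eye(n_params)                   # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (y - phi @ theta)    # innovation update
        P = (P - np.outer(k, phi) @ P) / lam     # forgetting-weighted covariance
    return theta, P

# Toy system y_k = a*y_{k-1} + b*u_{k-1} with `a` drifting slowly,
# loosely mimicking a slowly evolving discharge.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    a = 0.7 + 0.0004 * k
    y[k] = a * y[k-1] + 0.5 * u[k-1] + 0.01 * rng.standard_normal()
phi_seq = [(y[k-1], u[k-1]) for k in range(1, 500)]
theta, _ = rls_forgetting(phi_seq, y[1:], n_params=2)
print(theta)                                     # ~[final a, 0.5]
```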

  5. Adaptive Optimal Stochastic State Feedback Control of Resistive Wall Modes in Tokamaks

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Sen, A. K.; Longman, R. W.

    2007-06-01

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-squares method with an exponential forgetting factor and covariance resetting is used to identify the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct the control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.

  6. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  7. Calibration of Valiantzas' reference evapotranspiration equations for the Pilbara region, Western Australia

    NASA Astrophysics Data System (ADS)

    Ahooghalandari, Matin; Khiadani, Mehdi; Jahromi, Mina Esmi

    2017-05-01

    Reference evapotranspiration (ET0) is a critical component of water resources management and planning. Different methods, with different data requirements, have been developed to estimate ET0. In this study, the Hargreaves, Turc, Oudin, Copais, and Abtew methods and three forms of Valiantzas' formulas, developed in recent years, were used to estimate ET0 for the Pilbara region of Western Australia. The ET0 values estimated by these methods were compared with those from the FAO-56 Penman-Monteith (PM) method. The results showed that the Copais method and two of Valiantzas' equations, in their original forms, are suitable for estimating ET0 for the study area. A modified Honey-Bee Mating Optimization (MHBMO) algorithm was then implemented to calibrate the three Valiantzas equations for this southern-hemisphere region.

  8. A maximum power point prediction method for group control of photovoltaic water pumping systems based on parameter identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Su, J. H.; Guo, L.; Chen, J.

    2017-06-01

    This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve the optimization problem of group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) for model parameter estimation and identification over multiple P-V characteristic curves of a PVA model, and then corrects the identification results through the least-squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and disturbance-free estimation of the MPP is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
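
    A hedged sketch of the parameter-identification step is shown below, using SciPy's differential evolution as a stand-in for the paper's improved GA and a simplified explicit diode model (series and shunt resistances neglected); the model form, bounds, and data are assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def pv_current(V, i_ph, i_0, n_vt):
    """Simplified explicit diode model I(V) = I_ph - I_0*(exp(V/(n*Vt)) - 1)."""
    return i_ph - i_0 * (np.exp(V / n_vt) - 1.0)

def fit_pv_model(V_meas, I_meas):
    """Identify (I_ph, I_0, n*Vt) by minimizing squared current residuals;
    differential evolution stands in for the record's improved GA."""
    def loss(p):
        return np.sum((pv_current(V_meas, *p) - I_meas) ** 2)
    bounds = [(0.1, 15.0), (1e-12, 1e-4), (0.5, 3.0)]
    return differential_evolution(loss, bounds, seed=0, tol=1e-10).x

# Synthetic I-V samples from a known module plus sensor noise.
rng = np.random.default_rng(0)
V = np.linspace(0.0, 35.0, 80)
I_true = pv_current(V, 8.0, 1e-8, 1.75)
I_meas = I_true + 0.02 * rng.standard_normal(V.size)
print(fit_pv_model(V, I_meas))   # approximately recovers [8.0, 1e-8, 1.75]
```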

  9. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several noteworthy issues, including the choices of the weighting coefficients, the inversion range, and the optimal inversion method between two regularization algorithms, when estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.

  10. Information operator approach applied to the retrieval of vertical distributions of atmospheric constituents from ground-based FTIR measurements

    NASA Astrophysics Data System (ADS)

    Senten, Cindy; de Mazière, Martine; Vanhaelewyn, Gauthier; Vigouroux, Corinne; Delmas, Robert

    2010-05-01

    The retrieval of information about the vertical distribution of an atmospheric absorber from high spectral resolution ground-based Fourier Transform infrared (FTIR) solar absorption spectra is an important issue in remote sensing. A frequently used technique at present is the optimal estimation method. This work introduces the application of an alternative method, namely the information operator approach (Doicu et al., 2007; Hoogen et al., 1999), for extracting the available information from such FTIR measurements. This approach has been implemented within the well-known retrieval code SFIT2, by adapting the optimal estimation method such as to take into account only the significant contributions to the solution. In particular, we demonstrate the feasibility of the method when applied to ground-based FTIR spectra taken at the southern (sub)tropical site Ile de La Réunion (21° S, 55° E) in 2007. A thorough comparison has been made between the retrieval results obtained with the original optimal estimation method and the ones obtained with the information operator approach, regarding profile and column stability, information content and the corresponding full error budget evaluation. This has been done for the target species ozone (O3), methane (CH4), nitrous oxide (N2O), and carbon monoxide (CO). It is shown that the information operator approach performs well and is capable of achieving the same accuracy as optimal estimation, with a gain of stability and with the additional advantage of being less sensitive to the choice of a priori information as well as to the actual signal-to-noise ratio.

    Keywords: ground-based FTIR, solar absorption spectra, greenhouse gases, information operator approach

    References: Doicu, A., Hilgers, S., von Bargen, A., Rozanov, A., Eichmann, K.-U., von Savigny, C., and Burrows, J.P.: Information operator approach and iterative regularization methods for atmospheric remote sensing, J. Quant. Spectrosc. Radiat. Transfer, 103, 340-350, 2007. Hoogen, R., Rozanov, V.V., and Burrows, J.P.: Ozone profiles from GOME satellite data: description and first validation, J. Geophys. Res., 104(D7), 8263-8280, 1999.

  11. A direct method for synthesizing low-order optimal feedback control laws with application to flutter suppression

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1980-01-01

    A direct method of synthesizing a low-order optimal feedback control law for a high order system is presented. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean square steady state responses and control inputs. The controller is shown to be equivalent to a partial state estimator. The method is applied to the problem of active flutter suppression. Numerical results are presented for a 20th order system representing an aeroelastic wind-tunnel wing model. Low-order controllers (fourth and sixth order) are compared with a full order (20th order) optimal controller and found to provide near optimal performance with adequate stability margins.

  12. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Conditional nonlinear optimal perturbations based on the particle swarm optimization and their applications to the predictability problems

    NASA Astrophysics Data System (ADS)

    Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun

    2017-02-01

    In predictability research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimating the lower bound of the maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impact of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence yielding a false estimate of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute the CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and a long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic population-based search algorithm, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The results show that the estimate presented by PSO-CNOP is closer to the true value than that of ADJ-CNOP as the forecast time increases.

  14. Bilateral step length estimation using a single inertial measurement unit attached to the pelvis

    PubMed Central

    2012-01-01

    Background: The estimation of spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods: The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric (SP) system as a gold standard on nine subjects walking ten laps along a closed-loop track of about 25 m, varying their speed. For each lap, only the IMU data recorded in a 4 m long portion of the track included in the calibrated volume of the SP system were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot contact instants. A combination of a Kalman filter and an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results: The step length was estimated for all subjects with less than 3% error. The traversed distance was assessed with less than 2% error. Conclusions: The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements from a single IMU that can be found in the literature. In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily activity monitoring would be of the same order of magnitude as those presented. PMID:22316235

  15. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
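
    A hedged sketch of this kind of formulation using the cvxpy modeling package (assumed available, with an SDP-capable solver installed) is shown below. The torque model, eigenvalue bounds, and synthetic data are illustrative; the key point is that the rigid-body residual J*wdot + w x (J*w) - tau is linear in the entries of J, which keeps the problem convex:

```python
import numpy as np
import cvxpy as cp

def skew(w):
    """Matrix form of the cross product: skew(w) @ v == np.cross(w, v)."""
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def estimate_inertia(omega, omega_dot, tau, lam_min=0.1, lam_max=50.0):
    """Constrained least squares for the inertia matrix J:
    minimize sum ||J wdot + w x (J w) - tau||^2
    subject to lam_min*I <= J <= lam_max*I (LMI eigenvalue bounds)."""
    J = cp.Variable((3, 3), symmetric=True)
    residuals = [J @ wd + skew(w) @ (J @ w) - t
                 for w, wd, t in zip(omega, omega_dot, tau)]
    objective = cp.Minimize(sum(cp.sum_squares(r) for r in residuals))
    constraints = [J >> lam_min * np.eye(3), J << lam_max * np.eye(3)]
    cp.Problem(objective, constraints).solve()
    return J.value

# Synthetic data generated from a known inertia matrix.
rng = np.random.default_rng(0)
J_true = np.diag([10.0, 12.0, 8.0])
omega = rng.standard_normal((50, 3))
omega_dot = rng.standard_normal((50, 3))
tau = [J_true @ wd + np.cross(w, J_true @ w) + 0.01 * rng.standard_normal(3)
       for w, wd in zip(omega, omega_dot)]
print(np.round(estimate_inertia(omega, omega_dot, tau), 2))
```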

  16. Application of Adaptive DP-optimality to Design a Pilot Study for a Clotting Time Test for Enoxaparin.

    PubMed

    Gulati, Abhishek; Faed, James M; Isbister, Geoffrey K; Duffull, Stephen B

    2015-10-01

    Dosing of enoxaparin, like other anticoagulants, may result in bleeding following excessive doses and clot formation if the dose is too low. We recently showed that a factor Xa based clotting time test could potentially assess the effect of enoxaparin on the clotting system. However, the test did not perform well in subsequent individuals, and the effectiveness of an exogenous phospholipid, Actin FS, in reducing the variability in the clotting time was assessed. The aim of this work was to conduct an adaptive pilot study to determine the range of concentrations of Xa and Actin FS to take forward into a proof-of-concept study. A nonlinear parametric function was developed to describe the response surface over the factors of interest. An adaptive method was used to estimate the parameters using a D-optimal design criterion. In order to provide a reasonable probability of observing a success of the clotting time test, a P-optimal design criterion was incorporated using a loss function to describe the hybrid DP-optimality. The use of the adaptive DP-optimality method resulted in an efficient estimation of the model parameters using data from only 6 healthy volunteers. The use of response surface modelling identified a range of sets of Xa and Actin FS concentrations, any of which could be used for the proof-of-concept study. This study shows that parsimonious adaptive DP-optimal designs may provide both precise parameter estimates for response surface modelling as well as clinical confidence in the potential benefits of the study.

  17. Optimal Design for the Precise Estimation of an Interaction Threshold: The Impact of Exposure to a Mixture of 18 Polyhalogenated Aromatic Hydrocarbons

    PubMed Central

    Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.

    2014-01-01

    Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, its implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366

  18. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. The proposed method also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. For this purpose, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.

  19. Real-time localization of mobile device by filtering method for sensor fusion

    NASA Astrophysics Data System (ADS)

    Fuse, Takashi; Nagara, Keita

    2017-06-01

    Most applications on mobile devices require self-localization of the device. Since GPS cannot be used in indoor environments, the positions of mobile devices are estimated autonomously using an IMU. Because this self-localization relies on a low-accuracy IMU, self-localization in indoor environments is still challenging. Self-localization methods using images have been developed, and their accuracy is increasing. This paper develops a self-localization method without GPS in indoor environments by simultaneously integrating the sensors on mobile devices, such as the IMU and cameras. The proposed method consists of observation, forecasting, and filtering steps. The position and velocity of the mobile device are defined as the state vector. In the self-localization, observations correspond to the observation data from the IMU and camera (observation vector), forecasting to the mobile device motion model (system model), and filtering to tracking by inertial surveying with coplanarity conditions and an inverse depth model (observation model). Positions of the tracked mobile device are estimated by the system model (forecasting step), which assumes linear motion. The estimated positions are then optimized with respect to the new observation data based on their likelihood (filtering step). The optimization at the filtering step corresponds to maximum a posteriori estimation. A particle filter is utilized for the calculation through the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment. The experiments confirm the high performance of the method.
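
    A minimal bootstrap particle filter illustrating the forecasting and filtering steps is sketched below for a 1-D constant-velocity state; the motion model, noise levels, and Gaussian observation likelihood are illustrative assumptions, not the vision/IMU models of this record:

```python
import numpy as np

def particle_filter(z_seq, n_particles=1000, q=0.05, r=0.2, seed=0):
    """Bootstrap particle filter for a 1-D constant-velocity state [pos, vel]:
    forecast with the linear motion model, weight by the Gaussian likelihood
    of the position observation, then resample."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_particles, 2))        # [position, velocity]
    estimates = []
    for z in z_seq:
        # Forecasting step: linear motion model plus process noise.
        x[:, 0] += x[:, 1]
        x += q * rng.standard_normal(x.shape)
        # Filtering step: likelihood weighting against the observation.
        w = np.exp(-0.5 * ((z - x[:, 0]) / r) ** 2)
        w /= w.sum()
        estimates.append(w @ x)                      # posterior mean
        # Multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]
    return np.asarray(estimates)

# Hypothetical 1-D track observed with noisy position measurements.
rng = np.random.default_rng(1)
true_pos = np.cumsum(0.3 * np.ones(100))
z_seq = true_pos + 0.2 * rng.standard_normal(100)
est = particle_filter(z_seq)
print(np.abs(est[:, 0] - true_pos).mean())           # mean tracking error
```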

  20. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth of the calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting closer to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatching in electric power supply scheduling) are also described as a practical industrial application.

  1. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with the objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  3. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: main content.

    PubMed

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M

    2010-01-01

    Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, doubly robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.

  4. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    PubMed

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
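
    The covariance-encoding property mentioned above can be illustrated, in a hedged way, with SciPy's standard gaussian_kde (not fastKDE itself): estimate a bivariate joint density, then obtain a conditional PDF by slicing and renormalizing. The data and grid are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Joint sample with a nonlinear dependence between x and y.
rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)
y = x**2 + 0.3 * rng.standard_normal(20_000)

kde = gaussian_kde(np.vstack([x, y]))        # 2-D joint density estimate

# Conditional PDF p(y | x = x0) = p(x0, y) / integral_y p(x0, y) dy,
# evaluated on a grid and normalized numerically.
x0 = 1.0
y_grid = np.linspace(y.min(), y.max(), 400)
joint = kde(np.vstack([np.full_like(y_grid, x0), y_grid]))
cond = joint / (joint.sum() * (y_grid[1] - y_grid[0]))
print(y_grid[np.argmax(cond)])               # mode near x0**2 = 1.0
```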

  6. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., the smoothness parameter(s) and the energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  7. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

    Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
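
    A minimal sketch of the automated-tuning idea, assuming a scalar random-walk state and a grid over the process-noise variance scored by innovation log-likelihood; this is a generic stand-in, not the paper's PNCE.

    ```python
    # Tune a scalar Kalman filter's process-noise variance q by maximizing
    # the innovation log-likelihood over a grid (generic idea only).
    import numpy as np

    def kf_innovation_loglik(z, q, r=0.1):
        """Run a random-walk Kalman filter; return innovation log-likelihood."""
        x, p, ll = 0.0, 1.0, 0.0
        for zk in z:
            p += q                      # predict (random-walk state model)
            s = p + r                   # innovation variance
            nu = zk - x                 # innovation
            ll += -0.5 * (np.log(2 * np.pi * s) + nu**2 / s)
            k = p / s                   # Kalman gain
            x += k * nu                 # update state
            p *= (1 - k)                # update covariance
        return ll

    rng = np.random.default_rng(1)
    truth = np.cumsum(rng.normal(scale=0.05, size=500))    # hidden random walk
    z = truth + rng.normal(scale=np.sqrt(0.1), size=500)   # noisy measurements

    qs = np.logspace(-5, 0, 30)
    best_q = qs[np.argmax([kf_innovation_loglik(z, q) for q in qs])]
    print(f"tuned process-noise variance: {best_q:.2e} (true 2.5e-03)")
    ```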

  8. Blind third-order dispersion estimation based on fractional Fourier transformation for coherent optical communication

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun

    2018-02-01

    In this paper, we propose a blind third-order dispersion estimation method based on fractional Fourier transformation (FrFT) for optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, this method can estimate the dispersion slope and further calculate the third-order dispersion. The simulation results demonstrate that the estimation error is less than 2% in 28 GBaud dual polarization quadrature phase-shift keying (DP-QPSK) and 28 GBaud dual polarization 16 quadrature amplitude modulation (DP-16QAM) systems. Through simulations, the proposed third-order dispersion estimation method is shown to be robust against nonlinear and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a search step with coarse and fine granularity is used to find the optimal order of the FrFT. The third-order dispersion estimation method based on FrFT can be used to monitor the third-order dispersion in optical fiber systems.
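
    The coarse-then-fine search over the FrFT order can be sketched generically as below; the objective function is a hypothetical stand-in for the metric that would score the FrFT of the received signal.

    ```python
    # Coarse-to-fine grid search, as used to locate the optimal FrFT order
    # without scanning the whole range at fine resolution.
    import numpy as np

    def objective(order):                 # hypothetical score, peaked at 1.37
        return -(order - 1.37)**2

    coarse = np.arange(0.5, 1.5, 0.05)                  # coarse granularity
    best = coarse[np.argmax([objective(a) for a in coarse])]

    fine = np.arange(best - 0.05, best + 0.05, 0.001)   # fine scan near best
    best = fine[np.argmax([objective(a) for a in fine])]
    print(f"estimated optimal order: {best:.3f}")
    ```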

  9. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  10. Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-07-01

    This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. To simplify the problem, generic fuel-optimal low-thrust formation reconfiguration can be treated as reconfiguration without initial or terminal coast arcs, whose optimal solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-optimal impulsive and low-thrust solutions.

  11. Novel trace chemical detection algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros

    2017-05-01

    Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component of a variety of applications relevant to law enforcement and the intelligence communities. Performance of these methods is impacted by spectral signature variability due to the presence of contaminants, surface roughness, and nonlinear dependence on abundances, as well as by operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include the adaptive cosine estimator (ACE, and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal, as well as their built-in limitations and failure modes.
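
    For reference, the textbook form of the ACE statistic, one of the algorithm classes compared here, can be computed as follows (synthetic data; spectra assumed background-mean-subtracted):

    ```python
    # Textbook adaptive cosine estimator (ACE) for pixel spectrum x, target
    # signature s, and background covariance Sigma:
    # ACE(x) = (s' Sigma^-1 x)^2 / ((s' Sigma^-1 s) (x' Sigma^-1 x)).
    import numpy as np

    def ace(x, s, sigma_inv):
        num = (s @ sigma_inv @ x) ** 2
        den = (s @ sigma_inv @ s) * (x @ sigma_inv @ x)
        return num / den

    rng = np.random.default_rng(2)
    background = rng.normal(size=(5000, 30))               # clutter spectra
    sigma_inv = np.linalg.inv(np.cov(background, rowvar=False))
    s = rng.normal(size=30)                                # target signature
    x = 0.3 * s + rng.normal(size=30)                      # pixel with trace target
    print(f"ACE score: {ace(x, s, sigma_inv):.3f}")        # near 0 for pure clutter
    ```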

  12. Systematic parameter estimation in data-rich environments for cell signalling dynamics

    PubMed Central

    Nim, Tri Hieu; Luo, Le; Clément, Marie-Véronique; White, Jacob K.; Tucker-Kellogg, Lisa

    2013-01-01

    Motivation: Computational models of biological signalling networks, based on ordinary differential equations (ODEs), have generated many insights into cellular dynamics, but the model-building process typically requires estimating rate parameters based on experimentally observed concentrations. New proteomic methods can measure concentrations for all molecular species in a pathway; this creates a new opportunity to decompose the optimization of rate parameters. Results: In contrast with conventional parameter estimation methods that minimize the disagreement between simulated and observed concentrations, the SPEDRE method fits spline curves through observed concentration points, estimates derivatives and then matches the derivatives to the production and consumption of each species. This reformulation of the problem permits an extreme decomposition of the high-dimensional optimization into a product of low-dimensional factors, each factor enforcing the equality of one ODE at one time slice. Coarsely discretized solutions to the factors can be computed systematically. Then the discrete solutions are combined using loopy belief propagation, and refined using local optimization. SPEDRE has unique asymptotic behaviour with runtime polynomial in the number of molecules and timepoints, but exponential in the degree of the biochemical network. SPEDRE performance is comparatively evaluated on a novel model of Akt activation dynamics including redox-mediated inactivation of PTEN (phosphatase and tensin homologue). Availability and implementation: Web service, software and supplementary information are available at www.LtkLab.org/SPEDRE. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: LisaTK@nus.edu.sg PMID:23426255
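
    The derivative-matching reformulation at the heart of SPEDRE can be illustrated on a one-species toy reaction; the sketch covers only the spline-and-derivative step, not the factorization or belief-propagation machinery.

    ```python
    # Toy problem A ->(k) 0: fit a spline through observed concentrations,
    # differentiate it, and choose k so dA/dt matches -k*A in least squares.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    k_true = 0.7
    t = np.linspace(0, 5, 25)
    rng = np.random.default_rng(3)
    A_obs = np.exp(-k_true * t) + rng.normal(scale=0.01, size=t.size)

    spline = UnivariateSpline(t, A_obs, s=len(t) * 0.01**2)  # smoothing spline
    A_fit, dA_fit = spline(t), spline.derivative()(t)

    # Least-squares solution of dA/dt = -k * A for k (closed form here).
    k_hat = -np.sum(dA_fit * A_fit) / np.sum(A_fit**2)
    print(f"estimated rate constant: {k_hat:.3f} (true {k_true})")
    ```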

  13. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller, more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense: for all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.

  14. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high, and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model on subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.

  15. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with observation data of crustal deformation collected at widely spread observation points, compared to estimations of slips only. Such an estimate can be formulated as a non-linear inverse problem of material properties of viscosity and input force that is equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.

  16. Optimization of transfer trajectories to the Apophis asteroid for spacecraft with high and low thrust

    NASA Astrophysics Data System (ADS)

    Ivashkin, V. V.; Krylov, I. V.

    2014-03-01

    The problem of optimization of a spacecraft transfer to the Apophis asteroid is investigated. The scheme of transfer under analysis includes a geocentric stage of boosting the spacecraft with high thrust, a heliocentric stage of control by a low thrust engine, and a stage of deceleration with injection to an orbit of the asteroid's satellite. In doing this, the problem of optimal control is solved for cases of ideal and piecewise-constant low thrust, and the optimal magnitude and direction of spacecraft's hyperbolic velocity "at infinity" during departure from the Earth are determined. The spacecraft trajectories are found based on a specially developed comprehensive method of optimization. This method combines the method of dynamic programming at the first stage of analysis and the Pontryagin maximum principle at the concluding stage, together with the parameter continuation method. The estimates are obtained for the spacecraft's final mass and for the payload mass that can be delivered to the asteroid using the Soyuz-Fregat carrier launcher.

  17. Estimation of positive semidefinite correlation matrices by using convex quadratic semidefinite programming.

    PubMed

    Fushiki, Tadayoshi

    2009-07-01

    The correlation matrix is a fundamental statistic that is used in many fields. For example, GroupLens, a collaborative filtering system, uses the correlation between users for predictive purposes. Since the correlation is a natural similarity measure between users, the correlation matrix may be used as the Gram matrix in kernel methods. However, the estimated correlation matrix sometimes has a serious defect: although the correlation matrix is originally positive semidefinite, the estimated one may not be positive semidefinite when not all ratings are observed. To obtain a positive semidefinite correlation matrix, the nearest correlation matrix problem has recently been studied in the fields of numerical analysis and optimization. However, statistical properties are not explicitly used in such studies. To obtain a positive semidefinite correlation matrix, we assume an approximate model. By using the model, an estimate is obtained as the optimal point of an optimization problem formulated with information on the variances of the estimated correlation coefficients. The problem is solved by a convex quadratic semidefinite program. A penalized likelihood approach is also examined. The MovieLens data set is used to test our approach.
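
    A plain (unweighted) version of the nearest-correlation-matrix problem is a small convex semidefinite program; the sketch below uses cvxpy and omits the paper's variance-based weighting and penalized-likelihood variants.

    ```python
    # Nearest correlation matrix as a convex semidefinite program:
    # minimize the Frobenius distance to R subject to PSD and unit diagonal.
    import numpy as np
    import cvxpy as cp

    R = np.array([[ 1.0,  0.9,  0.7],
                  [ 0.9,  1.0, -0.3],
                  [ 0.7, -0.3,  1.0]])   # indefinite "correlation" estimate

    n = R.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Minimize(cp.norm(X - R, "fro")),
                      [X >> 0, cp.diag(X) == 1])
    prob.solve()
    print(np.round(X.value, 3))          # positive semidefinite, unit diagonal
    ```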

  18. A Full-Envelope Air Data Calibration and Three-Dimensional Wind Estimation Method Using Global Output-Error Optimization and Flight-Test Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2012-01-01

    A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the uncalibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.
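
    The output-error idea can be illustrated on a deliberately simplified problem: recover an airspeed scale factor, a bias, and a constant horizontal wind from inertial velocities as the aircraft changes heading. All models and names here are illustrative, not the X-48B calibration models.

    ```python
    # Toy output-error calibration: heading variation de-correlates the
    # airspeed bias from the constant wind, making all parameters identifiable.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    scale_t, bias_t, wN_t, wE_t = 1.05, -2.0, 6.0, -3.0    # "true" values
    raw = rng.uniform(60, 120, size=300)       # uncalibrated airspeed readings
    psi = rng.uniform(0, 2 * np.pi, size=300)  # heading, varied by the maneuver
    tas = scale_t * raw + bias_t               # true airspeed
    vN = tas * np.cos(psi) + wN_t + rng.normal(scale=0.3, size=300)
    vE = tas * np.sin(psi) + wE_t + rng.normal(scale=0.3, size=300)

    def residuals(theta):
        scale, bias, wN, wE = theta
        tas_hat = scale * raw + bias
        return np.concatenate([tas_hat * np.cos(psi) + wN - vN,
                               tas_hat * np.sin(psi) + wE - vE])

    sol = least_squares(residuals, x0=[1.0, 0.0, 0.0, 0.0])
    print("scale, bias, wind N, wind E:", np.round(sol.x, 2))
    ```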

  19. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and to have a shorter memory lag than the QT interval.
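
    The reformulation trick, folding constraints into the cost so that unconstrained descent methods apply, can be sketched with a generic quadratic penalty (a stand-in for the paper's specific physiological constraints).

    ```python
    # Quadratic-penalty reformulation: minimize (x-3)^2 subject to x <= 1,
    # solved with an unconstrained method (BFGS). The penalty weight mu
    # controls how closely the constraint is enforced.
    import numpy as np
    from scipy.optimize import minimize

    def penalized(x, mu=100.0):
        violation = max(0.0, x[0] - 1.0)   # amount the constraint is broken
        return (x[0] - 3.0)**2 + mu * violation**2

    sol = minimize(penalized, x0=np.array([0.0]), method="BFGS")
    print(f"solution: {sol.x[0]:.3f} (constrained optimum is 1)")
    ```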

  20. Sensitivity analysis of pars-tensa Young's modulus estimation using inverse finite-element modeling

    NASA Astrophysics Data System (ADS)

    Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.

    2018-05-01

    Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique by optimizing a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing some modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, hence affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the most influential modeling assumption on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A base-line FE model of the unpressurized middle ear was created. The EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated. This included changes in PT thickness, pars-flaccida Young's modulus, and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness, and the least influential parameter was pars-flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
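
    Golden-section optimization of a one-dimensional modulus fit can be sketched as below; the cost function is a placeholder for the real FE-model-versus-profilometry shape comparison, and the modulus values are hypothetical.

    ```python
    # Golden-section search over a single material parameter.
    import numpy as np
    from scipy.optimize import minimize_scalar

    E_true = 2.3e7   # Pa, hypothetical

    def shape_mismatch(E):
        # Placeholder for: run FE model with modulus E, compare to FTP shape.
        return (np.log10(E) - np.log10(E_true))**2

    res = minimize_scalar(shape_mismatch, bracket=(1e6, 1e8), method="golden")
    print(f"estimated modulus: {res.x:.3e} Pa")
    ```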

  1. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
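
    The conservation and positivity mechanism can be illustrated with a one-dimensional upwind finite-volume step for the continuity equation; this sketch uses a constant velocity field and a periodic domain rather than the pendulum dynamics of the paper.

    ```python
    # 1D upwind finite-volume step for dp/dt + d(v p)/dx = 0: conserves
    # total probability, and the CFL condition keeps the density nonnegative.
    import numpy as np

    nx, dx, v, dt = 200, 0.05, 1.0, 0.02
    assert v * dt / dx <= 1.0, "CFL condition violated"

    x = dx * np.arange(nx)
    p = np.exp(-0.5 * ((x - 3.0) / 0.4)**2)
    p /= p.sum() * dx                          # normalized density

    for _ in range(100):
        flux = v * p                           # upwind flux (v > 0)
        p = p - (dt / dx) * (flux - np.roll(flux, 1))   # periodic domain

    print(f"total probability after advection: {p.sum() * dx:.6f}")
    ```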

  2. Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks.

    PubMed

    Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang

    2017-11-01

    The identification of the nonlinearity and coupling is crucial in the nonlinear target tracking problem in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve reliable covariance measurement, making it impractical for nonlinear systems that are rapidly changing. To deal with the problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor by the received measurements and state estimates from its connected sensors instead of the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to estimate the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computation load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.

  3. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values are about 140 times greater than and about 0.01 times less than reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters, and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.

  4. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  5. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135

  6. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    PubMed

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.

  7. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  8. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    NASA Astrophysics Data System (ADS)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting the dengue epidemic specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.

  9. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

    PubMed Central

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    2016-01-01

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
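
    In the linear setting, Rayleigh quotient maximization reduces to a generalized symmetric eigenproblem, which the sketch below solves directly; QUADRO's quadratic features, sparsity, and robust moment estimates are not shown.

    ```python
    # Maximizing x'Ax / x'Bx: the top generalized eigenvector of (A, B)
    # is the maximizer, and the top eigenvalue is the maximal quotient.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(5)
    M = rng.normal(size=(6, 6)); A = M @ M.T                  # "signal" matrix
    N = rng.normal(size=(6, 6)); B = N @ N.T + 6 * np.eye(6)  # "variance", PD

    w, V = eigh(A, B)                 # generalized symmetric eigenproblem
    x = V[:, -1]                      # eigenvector of the largest eigenvalue
    rq = (x @ A @ x) / (x @ B @ x)
    print(f"maximal Rayleigh quotient: {rq:.4f} (= top eigenvalue {w[-1]:.4f})")
    ```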

  10. An Optimal Parameterization Framework for Infrasonic Tomography of the Stratospheric Winds Using Non-Local Sources

    DOE PAGES

    Blom, Philip Stephen; Marcillo, Omar Eduardo

    2016-12-05

    A method is developed to apply acoustic tomography methods to a localized network of infrasound arrays with the intention of monitoring the atmosphere state in the region around the network using non-local sources, without requiring knowledge of the precise source location or the non-local atmosphere state. Closely spaced arrays provide a means to estimate phase velocities of signals that can place limiting bounds on certain characteristics of the atmosphere. Larger spacing between such clusters provides a means to estimate celerity from propagation times along multiple unique stratospherically or thermospherically ducted propagation paths and to compute more precise estimates of the atmosphere state. In order to avoid the commonly encountered complex, multimodal distributions for parametric atmosphere descriptions and to maximize the computational efficiency of the method, an optimal parameterization framework is constructed. This framework identifies the ideal combination of parameters for tomography studies in specific regions of the atmosphere, and statistical model selection analysis shows that high-quality corrections to the middle atmosphere winds can be obtained using as few as three parameters. Lastly, comparison of the resulting estimates for synthetic data sets shows qualitative agreement between the middle atmosphere winds and those estimated from infrasonic traveltime observations.

  11. Degradation trend estimation of slewing bearing based on LSSVM model

    NASA Astrophysics Data System (ADS)

    Lu, Chao; Chen, Jie; Hong, Rongjing; Feng, Yang; Li, Yuanyuan

    2016-08-01

    A novel prediction method is proposed based on least squares support vector machine (LSSVM) to estimate the slewing bearing's degradation trend from small sample data. This method chooses the vibration signal, which contains rich state information, as the object of study. Principal component analysis (PCA) was applied to fuse multi-feature vectors that reflect the health state of the slewing bearing, such as root mean square, kurtosis, wavelet energy entropy, and intrinsic mode function (IMF) energy. The degradation indicator fused by PCA reflects the degradation more comprehensively and effectively. The degradation trend of the slewing bearing was then predicted using the LSSVM model optimized by particle swarm optimization (PSO). The proposed method was demonstrated to be more accurate and effective by a whole-life experiment on a slewing bearing. Therefore, it can be applied in engineering practice.

  12. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling is based on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e., the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained according to external information for each single sounding. The optimization process to estimate the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real Audio-Magnetotelluric (AMT), and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
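
    A minimal PSO loop of the kind applied here can be written in a few lines; the misfit below is a toy stand-in for the electromagnetic forward-model misfit plus regularization.

    ```python
    # Minimal particle swarm optimizer (global best topology).
    import numpy as np

    def misfit(m):                  # toy stand-in for data misfit + regularization
        return np.sum((m - np.array([1.5, -0.5, 2.0]))**2, axis=-1)

    rng = np.random.default_rng(6)
    n_part, n_dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
    pos = rng.uniform(-5, 5, (n_part, n_dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), misfit(pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(200):
        r1, r2 = rng.random((2, n_part, n_dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        val = misfit(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("best model:", np.round(gbest, 3))
    ```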

  13. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  14. Optimal stimulus scheduling for active estimation of evoked brain networks.

    PubMed

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.

  15. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.

  16. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  17. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765

  18. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  19. Constraining Night Time Ecosystem Respiration by Inverse Approaches

    NASA Astrophysics Data System (ADS)

    Juang, J.; Stoy, P. C.; Siqueira, M. B.; Katul, G. G.

    2004-12-01

    Estimating nighttime ecosystem respiration remains a key challenge in quantifying ecosystem carbon budgets. Currently, nighttime eddy-covariance (EC) flux measurements are plagued by uncertainties often attributed to poor mixing within the canopy volume, non-turbulent transport of CO2 into and out of the canopy, and non-stationarity and intermittency. Here, we explore the use of second-order closure models to estimate nighttime ecosystem respiration by mathematically linking sources of CO2 to mean concentration profiles via the continuity equation and the CO2 flux budget equation modified to include thermal stratification. By forcing this model to match, in a root-mean-square sense, the nighttime measured mean CO2 concentration profiles within the canopy, the above-ground CO2 production and forest floor respiration can be estimated via multi-dimensional optimization techniques. We show that in a maturing pine and a mature hardwood forest, these optimized CO2 sources are (1) consistently larger than the eddy covariance flux measurements above the canopy, and (2) in good agreement with chamber-based measurements. We also show that by linking the optimized nighttime ecosystem respiration to temperature measurements, the estimated annual ecosystem respiration from this approach agrees well with biometric estimates, at least when compared to eddy-covariance methods conditioned on a friction velocity threshold. The difference between the annual ecosystem respiration obtained by this optimization method and the friction-velocity-thresholded nighttime EC fluxes can be as large as 700 g C m^-2 (in 2003) for the maturing pine forest, which is about 40% of the ecosystem respiration. For 2001 and 2002, the annual ecosystem respiration differences between the EC-based and the proposed approach were on the order of 300 to 400 g C m^-2.

  20. Multibody Kinematics Optimization for the Estimation of Upper and Lower Limb Human Joint Kinematics: A Systematized Methodological Review.

    PubMed

    Begon, Mickaël; Andersen, Michael Skipper; Dumas, Raphaël

    2018-03-01

    Multibody kinematics optimization (MKO) aims to reduce soft tissue artefact (STA) and is a key step in musculoskeletal modeling. The objective of this review was to identify the numerical methods, their validation and performance for the estimation of the human joint kinematics using MKO. Seventy-four papers were extracted from a systematized search in five databases and cross-referencing. Model-derived kinematics were obtained using either constrained optimization or Kalman filtering to minimize the difference between measured (i.e., by skin markers, electromagnetic or inertial sensors) and model-derived positions and/or orientations. While hinge, universal, and spherical joints prevail, advanced models (e.g., parallel and four-bar mechanisms, elastic joint) have been introduced, mainly for the knee and shoulder joints. Models and methods were evaluated using: (i) simulated data based, however, on oversimplified STA and joint models; (ii) reconstruction residual errors, ranging from 4 mm to 40 mm; (iii) sensitivity analyses which highlighted the effect (up to 36 deg and 12 mm) of model geometrical parameters, joint models, and computational methods; (iv) comparison with other approaches (i.e., single body kinematics optimization and nonoptimized kinematics); (v) repeatability studies that showed low intra- and inter-observer variability; and (vi) validation against ground-truth bone kinematics (with errors between 1 deg and 22 deg for tibiofemoral rotations and between 3 deg and 10 deg for glenohumeral rotations). Moreover, MKO was applied to various movements (e.g., walking, running, arm elevation). Additional validations, especially for the upper limb, should be undertaken and we recommend a more systematic approach for the evaluation of MKO. In addition, further model development, scaling, and personalization methods are required to better estimate the secondary degrees-of-freedom (DoF).

  1. Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum

    USDA-ARS?s Scientific Manuscript database

    We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...

  2. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many modern algorithms for engineering optimization. The most common application of problem sensitivities has been the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al. 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks of estimating parameter sensitivities by current methods have been: (1) the need for second-order information about the Lagrangian at the current point, and (2) the assumption that the active set of constraints does not change. The first of these two problems is addressed here, and a new algorithm is proposed that does not require explicit calculation of second-order information.

  3. Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design, where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding global optima of multi-modal functions and of searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources, since multiple function evaluations (flow simulations in aerodynamic design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on an evolutionary method that is a relatively new member of the general class of evolutionary methods, called differential evolution (DE). This method is easy to use and program, and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will of course yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete, and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization that is based on neural networks will also be included as a part of this lecture. The method offers advantages over traditional optimization methods. It is more flexible than other methods in dealing with design in the context of both steady and unsteady flows, partial and complete data sets, combined experimental and numerical data, inclusion of various constraints and rules of thumb, and other issues that characterize the aerodynamic design process. Neural networks provide a natural framework within which a succession of numerical solutions of increasing fidelity, incorporating more realistic flow physics, can be represented and utilized for optimization. Neural networks also offer an excellent framework for multiple-objective and multi-disciplinary design optimization. Simulation tools from various disciplines can be integrated within this framework, and rapid trade-off studies involving one or many disciplines can be performed. The prospect of combining neural-network-based optimization methods and evolutionary algorithms to obtain a hybrid method with the best properties of both will be included in this presentation. Achieving solution diversity and accurate convergence to the exact Pareto front in multiple-objective optimization usually requires a significant computational effort with evolutionary algorithms. In this lecture we will also explore the possibility of using neural networks to obtain estimates of the Pareto optimal front using non-dominated solutions generated by DE as training data. Neural network estimators have the potential advantage of reducing the number of function evaluations required to obtain solution accuracy and diversity, thus reducing design cost.
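
    As a concrete illustration of DE's strength on multimodal objectives, SciPy's implementation finds the global minimum of the Rastrigin function, where gradient-based searches typically stall in one of many local minima.

    ```python
    # Differential evolution on a multimodal test function.
    import numpy as np
    from scipy.optimize import differential_evolution

    def rastrigin(x):
        # Highly multimodal; global minimum 0 at the origin.
        x = np.asarray(x)
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 5
    result = differential_evolution(rastrigin, bounds, seed=7, tol=1e-8)
    print("minimum found at:", np.round(result.x, 4))
    ```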

  4. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
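
    The 1/(1 - r) blow-up is easy to reproduce numerically. The following Monte Carlo sketch (an illustration, not the replica calculation) estimates a minimum-variance portfolio from T samples drawn under an assumed identity covariance and compares in-sample and out-of-sample variances as r = N/T approaches 1:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                    # portfolio dimension
true_cov = np.eye(N)                      # assumed true covariance (illustrative)

def min_var_weights(cov):
    """Minimum-variance weights under the budget constraint sum(w) = 1."""
    ones = np.ones(len(cov))
    w = np.linalg.solve(cov, ones)
    return w / (ones @ w)

for T in [500, 200, 100, 75, 60]:         # r = N/T approaches 1 from below
    X = rng.multivariate_normal(np.zeros(N), true_cov, size=T)
    S = np.cov(X, rowvar=False)           # estimated covariance
    w = min_var_weights(S)
    r = N / T
    print(f"r={r:.2f}  in-sample={w @ S @ w:.4f}  "
          f"out-of-sample={w @ true_cov @ w:.4f}  1/(1-r)={1 / (1 - r):.2f}")
```

    The in-sample variance shrinks toward zero while the true (out-of-sample) variance of the estimated portfolio grows roughly like 1/(1 - r), as the abstract describes.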

  5. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. The approach is demonstrated on both synthetic and real-world images, where it produces good motion estimates.
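
    As a rough illustration of the mean-field idea — on a binary Ising-type MRF for image restoration, not the authors' motion model — each site carries a continuous expectation updated from its neighbours' expectations, replacing SA's stochastic sampling and ICM's hard decisions; the coupling and field strengths below are illustrative:

```python
import numpy as np

def mean_field_denoise(obs, beta=1.0, h=2.0, iters=30):
    """Mean-field updates for a +1/-1 Ising-type MRF with a noisy
    observation acting as an external field. Each pixel's expectation
    is a soft decision driven by its 4-neighbourhood."""
    m = obs.astype(float).copy()              # initialize with the noisy image
    for _ in range(iters):
        nb = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
              np.roll(m, 1, 1) + np.roll(m, -1, 1))
        m = np.tanh(beta * nb + h * obs)      # mean-field fixed-point update
    return np.sign(m)                         # final hard labels
```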

  6. Towards using musculoskeletal models for intelligent control of physically assistive robots.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2011-01-01

    With the increasing number of robots being developed to physically assist humans in tasks such as rehabilitation and assistive living, more intelligent and personalized control systems are desired. In this paper we propose the use of a musculoskeletal model to estimate the strength of the user, information that can be utilized to improve control schemes in which robots physically assist humans. An optimization model is developed that utilizes a musculoskeletal model to estimate human strength in a specified dynamic state. Results of this optimization, as well as methods of using it to observe muscle-based weaknesses in task space, are presented. Lastly, potential methods for, and problems in, incorporating this model into a robot control system are discussed.

  7. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast depend crucially on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
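
    Under the standard assumption of unbiased forecasts with uncorrelated errors, MSE-optimal ensemble weights are proportional to the inverse of each forecast's mean square error; a minimal sketch of the weighting rule (not the CCA machinery):

```python
import numpy as np

def optimal_ensemble(forecasts, mse):
    """Combine unbiased forecasts with uncorrelated errors using weights
    inversely proportional to their estimated mean square errors."""
    w = 1.0 / np.asarray(mse, dtype=float)
    w /= w.sum()
    return w @ np.asarray(forecasts, dtype=float), w

# three individual forecasts and their estimated MSEs (illustrative numbers)
combined, weights = optimal_ensemble([1.2, 0.8, 1.0], [0.5, 0.9, 0.7])
```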

  8. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  9. Modeling, estimation and identification methods for static shape determination of flexible structures. [for large space structure design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    This paper outlines methods for modeling, identification and estimation for static shape determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum likelihood, which finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white-noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data are processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating the performance of the shape determination methods.

  10. Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution

    PubMed Central

    Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang

    2015-01-01

    Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; these methods generally represent each target patch using an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, representation coefficients estimated in the image domain may not stay optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, in which a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply the proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results compared to counterpart methods with a single-layer dictionary. PMID:26942233

  11. Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang

    2015-10-01

    Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; these methods generally represent each target patch using an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, representation coefficients estimated in the image domain may not stay optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, in which a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply the proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results compared to counterpart methods with a single-layer dictionary.

  12. Estimation of optimal pivot point for remote center of motion alignment in surgery.

    PubMed

    Rosa, Benoît; Gruijthuijsen, Caspar; Van Cleynenbreugel, Ben; Sloten, Jos Vander; Reynaerts, Dominiek; Poorten, Emmanuel Vander

    2015-02-01

    The determination of an optimal pivot point is important for instrument manipulation in minimally invasive surgery. Such knowledge is of particular importance for robotic-assisted surgery, where robots need to rotate precisely around a specific point in space in order to minimize trauma to the body wall while maintaining position control. Remote center of motion (RCM) mechanisms are commonly used, where the RCM point is manually and visually aligned. If not positioned appropriately, this misalignment might lead to intolerably high forces on the body wall, with increased risk of postoperative complications or instrument damage. An automated method to align the RCM with the optimal pivot point was developed and tested. Computer vision and a lightweight calibration procedure are used to estimate the optimal pivot point. One or two pre-calibrated cameras viewing the surgical scene are employed. The surgeon is asked to make short pivoting movements, applying as little torque as possible, with an instrument of choice passing through the insertion point while camera images are being recorded. The physical properties of an instrument rotating around a pivot point are exploited in a random sample consensus scheme to robustly estimate the ideal position of the RCM in the image planes. Triangulation is used to estimate the RCM position in 3D. Experiments were performed on a specially designed mockup to test the method. The position of the pivot point is estimated with an average error of less than 1.85 mm using two webcams placed approximately 30 cm to 1 m away from the scene. The entire procedure was completed in a few seconds. The automated method to estimate the ideal position of the RCM was shown to be reliable. The method can be implemented within a visual servoing approach to automatically place the RCM point, or the results can be displayed on a screen to provide guidance to the surgeon. Further work includes the development of an image-guided alignment method and validation with in vivo experiments.
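
    The geometric core of such an estimate — before the RANSAC outlier rejection and two-camera triangulation described above — is the least-squares point closest to the set of observed instrument axes. A sketch under the assumption that each axis is given by a point and a direction in 3D:

```python
import numpy as np

def pivot_from_lines(points, dirs):
    """Least-squares point minimizing the summed squared distance to a set
    of 3-D lines, each given by a point on the line and a direction."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the axis
        A += P
        b += P @ p
    return np.linalg.solve(A, b)         # solvable unless all axes are parallel
```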

  13. Construction of joint confidence regions for the optimal true class fractions of Receiver Operating Characteristic (ROC) surfaces and manifolds.

    PubMed

    Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C

    2017-06-01

    The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of the Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biomet 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three- and to the general k-class case which involves the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A that has been used to characterize cognitive impairment in patients with Parkinson's disease.

  14. A neuro-data envelopment analysis approach for optimization of uncorrelated multiple response problems with smaller the better type controllable factors

    NASA Astrophysics Data System (ADS)

    Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed

    2013-11-01

    In this paper, a new method is proposed to optimize a multi-response optimization problem based on the Taguchi method for processes in which the controllable factors are smaller-the-better (STB)-type variables and the analyst desires an optimal solution with smaller amounts of the controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since not all possible combinations of factor levels are considered in the Taguchi method, the response values of the possible unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by central composite design (CCD) and a genetic algorithm (GA). Then data envelopment analysis (DEA) is applied to determine the efficiency of each treatment. Although the key issue for implementation of DEA is its philosophy, the maximization of outputs versus the minimization of inputs, this issue has been neglected in previous similar studies of multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed method is verified in a plastic molding process. Moreover, a sensitivity analysis is performed with an efficiency-estimator neural network. The results show the efficiency of the proposed approach.

  15. A two-phase copula entropy-based multiobjective optimization approach to hydrometeorological gauge network design

    NASA Astrophysics Data System (ADS)

    Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin

    2017-12-01

    Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibrating and verifying hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, with the use of three model evaluation measures, including the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.

  16. GRACE Hydrological estimates for small basins: Evaluating processing approaches on the High Plains Aquifer, USA

    NASA Astrophysics Data System (ADS)

    Longuevergne, Laurent; Scanlon, Bridget R.; Wilson, Clark R.

    2010-11-01

    The Gravity Recovery and Climate Experiment (GRACE) satellites provide observations of water storage variation at regional scales. However, when focusing on a region of interest, limited spatial resolution and noise contamination can cause estimation bias and spatial leakage, problems that are exacerbated as the region of interest approaches the GRACE resolution limit of a few hundred km. Reliable estimates of water storage variations in small basins require compromises between competing needs for noise suppression and spatial resolution. The objective of this study was to quantitatively investigate processing methods and their impacts on bias, leakage, GRACE noise reduction, and estimated total error, allowing solution of the trade-offs. Among the methods tested is a recently developed concentration algorithm called spatiospectral localization, which optimizes the basin shape description, taking into account limited spatial resolution. This method is particularly suited to retrieval of basin-scale water storage variations and is effective for small basins. To increase confidence in derived methods, water storage variations were calculated for both CSR (Center for Space Research) and GRGS (Groupe de Recherche de Géodésie Spatiale) GRACE products, which employ different processing strategies. The processing techniques were tested on the intensively monitored High Plains Aquifer (450,000 km2 area), where application of the appropriate optimal processing method allowed retrieval of water storage variations over a portion of the aquifer as small as ˜200,000 km2.

  17. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as an optimization process, is the procedure followed in obtaining Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function that the different methods fit to the data is three Lorentzian curves plus a straight line. Gaussian curves have also been considered. The conclusions of this study are outlined in the following paragraphs: (i) natural electromagnetic noise is better fitted using Lorentzian functions; (ii) the measurement bandwidth can accelerate the convergence of the optimization method; (iii) the Gradient method converges least reliably and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); (iv) there are differences in the MSE among the parameters that define the fit function, within an interval from 1% to 5%.
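
    A fit of this form — three Lorentzians plus a linear background, minimized with Levenberg-Marquardt — might be set up as follows; the initial mode frequencies are the commonly quoted Schumann values, used here purely as an illustrative starting guess:

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, f):
    """Three Lorentzian peaks plus a straight-line background."""
    y = theta[9] * f + theta[10]
    for A, f0, w in theta[:9].reshape(3, 3):
        y += A / (1.0 + ((f - f0) / w) ** 2)
    return y

def fit_spectrum(f, spectrum, theta0):
    """Levenberg-Marquardt fit of the measured amplitude spectrum."""
    res = least_squares(lambda t: model(t, f) - spectrum, theta0, method="lm")
    return res.x

# initial guess: modes near 7.8, 14.1 and 20.3 Hz (amplitudes and widths illustrative)
theta0 = np.array([1.0, 7.8, 1.0,  0.5, 14.1, 1.5,  0.3, 20.3, 2.0,  0.0, 0.0])
```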

  18. Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system

    NASA Astrophysics Data System (ADS)

    Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye

    2017-12-01

    In this paper, we focus on analysis of preamble-based joint estimation of the channel and the laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the impact of noise on the estimation accuracy, we propose an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots over multiple FBMC frames. The laser-frequency offset is estimated from the phase of this average. After correcting the LFO, the final channel response is acquired by averaging the channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically for different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
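
    The frequency-offset step — accumulate a pilot correlation at a fixed lag across frames, then read the offset from the phase of the sum — can be sketched as follows; the pilot layout, lag, and scaling are illustrative assumptions, not the paper's preamble design:

```python
import numpy as np

def estimate_lfo(frames, pilot_slice, lag, fs):
    """Accumulate a lag-`lag` pilot autocorrelation over frames and estimate
    the frequency offset from the phase of the accumulated sum."""
    R = 0.0 + 0.0j
    for r in frames:                          # r: received samples of one frame
        x = r[pilot_slice]
        R += np.vdot(x[:-lag], x[lag:])       # sum of conj(x[n]) * x[n + lag]
    return np.angle(R) * fs / (2.0 * np.pi * lag)
```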

  19. Estimation of the sugar cane cultivated area from LANDSAT images using the two phase sampling method

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Cappelletti, C. A.; Mendonca, F. J.; Lee, D. C. L.; Shimabukuro, Y. E.

    1982-01-01

    A two phase sampling method and the optimal sampling segment dimensions for the estimation of sugar cane cultivated area were developed. This technique employs visual interpretations of LANDSAT images and panchromatic aerial photographs considered as the ground truth. The estimates, as a mean value of 100 simulated samples, represent 99.3% of the true value with a CV of approximately 1%; the relative efficiency of the two phase design was 157% when compared with a one phase aerial photographs sample.

  20. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon the functional R(f'') = integral of f''(t)^2 dt, which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of R(f''), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to better characterise the mean neutrality of Tunisian Berber populations.
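
    The first step of such a plug-in rule, using a normal-reference value for R(f'') in place of the authors' analytical approximation, reduces to the familiar 1.06*sigma*n^(-1/5) bandwidth for a Gaussian kernel:

```python
import numpy as np

def plugin_bandwidth(x):
    """One-step plug-in bandwidth h = (R(K) / (mu2(K)^2 * R(f'') * n))^(1/5)
    for a Gaussian kernel, with R(f'') taken from a normal reference
    (an assumption here, not the paper's analytical approximation)."""
    n = len(x)
    sigma = np.std(x, ddof=1)
    Rk = 1.0 / (2.0 * np.sqrt(np.pi))                 # R(K) for the Gaussian kernel
    Rf2 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)     # normal-reference R(f'')
    return (Rk / (Rf2 * n)) ** 0.2                    # ~ 1.06 * sigma * n**-0.2

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))
```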

  1. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube-and-wing aircraft, and a substantial amount of effort went into its development. This report serves as comprehensive documentation of the FLOPS weight estimation method, presenting its development process along with the estimation process itself.

  2. Simultaneous Helmert transformations among multiple frames considering all relevant measurements

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Lin, Peng; Bian, Hefang; Gao, Jingxiang

    2018-03-01

    Helmert or similarity models are widely employed to relate different coordinate frames. In practice one often needs to transform coordinates from more than one old frame into a new one. One may perform separate Helmert transformations for each old frame; however, although each transformation is locally optimal, this is not globally optimal. Transformations among three frames, namely one new and two old, are studied as an example. Simultaneous Helmert transformations among all frames are also studied. Least-squares estimation of the transformation parameters and of the coordinates in the new frame of all stations involved is performed. A functional model for the transformations among multiple frames is developed. A realistic stochastic model is adopted, in which not only non-common stations are taken into consideration but also errors in all measurements are addressed. An algorithm of iterative linearizations and estimations is derived in detail. The proposed method is globally optimal and, perhaps more importantly, it produces a unified network of the new frame, providing coordinate estimates for all involved stations and the associated covariance matrix, with the latter being consistent with the true errors of the former. Simulations are conducted, and the results validate the superiority of the proposed combined method over separate approaches.
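
    For a single pair of frames with equal weights, the similarity parameters have a closed-form least-squares solution (the Procrustes/Umeyama construction); the paper's combined multi-frame adjustment generalizes this with a realistic stochastic model. A sketch of the single-pair core:

```python
import numpy as np

def estimate_helmert(X, Y):
    """Least-squares similarity transform y ~ s * R @ x + t from paired
    station coordinates X, Y of shape (n, 3), n >= 3, equal weights."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))    # guard against a reflection
    R = U @ D @ Vt                              # rotation
    s = (S * np.diag(D)).sum() / (Xc**2).sum()  # scale
    t = my - s * R @ mx                         # translation
    return s, R, t
```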

  3. Nonlinear system identification for prostate cancer and optimality of intermittent androgen suppression therapy.

    PubMed

    Suzuki, Taiji; Aihara, Kazuyuki

    2013-09-01

    These days prostate cancer is one of the most common types of malignant neoplasm in men. Androgen ablation therapy (hormone therapy) has been shown to be effective for advanced prostate cancer. However, continuous hormone therapy often causes recurrence. This results from the progression of androgen-dependent cancer cells to androgen-independent cancer cells during the continuous hormone therapy. One possible method to prevent the progression to the androgen-independent state is intermittent androgen suppression (IAS) therapy, which ceases dosing intermittently. In this paper, we propose two methods to estimate the dynamics of prostate cancer, and investigate the IAS therapy from the viewpoint of optimality. The two methods that we propose for dynamics estimation are a variational Bayesian method for a piecewise affine (PWA) system and a Gaussian process regression method. We apply the proposed methods to real clinical data and compare their predictive performances. Then, using the estimated dynamics of prostate cancer, we observe how prostate cancer behaves for various dosing schedules. It can be seen that the conventional IAS therapy is a way of imposing high cost for dosing while keeping the prostate cancer in a safe state. We would like to dedicate this paper to the memory of Professor Luigi M. Ricciardi. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros

    NASA Technical Reports Server (NTRS)

    Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.

    1973-01-01

    Many future NASA programs require very accurate pointing stability, well beyond anything attempted to date. This paper suggests a control system which has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The method of calculus of variations is applied to estimate the changes of the performance index as well as those of the inequality constraints on the state variables and terminal conditions. Thus, an algorithm is obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.

  5. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
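
    The PSO core used in such a scheme is compact. The paper's objective — the misfit between measured and FEM-computed natural frequencies as a function of crack location and severity — is not reproduced here, so the sketch below is a generic box-constrained PSO minimizer with illustrative inertia and acceleration constants:

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm minimization of f over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = lo + rng.random((n_particles, dim)) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([f(p) for p in x])
        better = cost < pcost                 # update personal bests
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[np.argmin(pcost)].copy()
    return gbest, pcost.min()
```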

  6. Computational Biomathematics: Toward Optimal Control of Complex Biological Systems

    DTIC Science & Technology

    2016-09-26

    …equations seems daunting. However, we are currently working on parameter estimation methods that show some promise. In this approach, we generate data from…

  7. Evaluation of optimized b-value sampling schemas for diffusion kurtosis imaging with an application to stroke patient data

    PubMed Central

    Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong

    2013-01-01

    Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sample b-values distributed toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. From the ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800, and around 2600 s/mm2, respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303

  8. Optimal word sizes for dissimilarity measures and estimation of the degree of dissimilarity between DNA sequences.

    PubMed

    Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An

    2005-11-15

    Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is 3-fold. Firstly, we compare the performance of several word-based and alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows that (1) for whole-sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice; (2) for each measure the optimal word size increases with window size; (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis; (4) the estimate of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences; and (5) the estimate is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The SK-LD algorithm, the beta estimator and the simulation software are implemented in MATLAB code and are available at http://www.stat.ncku.edu.tw/tjwu
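
    The SK-LD between two sequences is the symmetric Kullback-Leibler discrepancy between their word (k-mer) frequency profiles. A minimal sketch, with add-one smoothing as an illustrative regularization (the paper's exact estimator may differ):

```python
import numpy as np
from itertools import product

def word_freqs(seq, k):
    """Smoothed relative frequencies of all 4^k DNA words in seq."""
    words = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(words, 1.0)            # add-one pseudocount
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if w in counts:
            counts[w] += 1.0
    v = np.array([counts[w] for w in words])
    return v / v.sum()

def sk_ld(seq1, seq2, k):
    """Symmetric Kullback-Leibler discrepancy of word-frequency profiles."""
    p, q = word_freqs(seq1, k), word_freqs(seq2, k)
    return float(np.sum((p - q) * np.log(p / q)))

d = sk_ld("ACGTTGCA" * 50, "ACGTTGGA" * 50, k=3)
```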

  9. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4) where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
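
    For a single qubit measured in the Pauli basis, the least-squares reconstruction reduces to plugging the estimated expectation values into the Bloch expansion; the projection onto the Bloch ball below is a crude physicality fix, not the paper's procedure:

```python
import numpy as np

# single-qubit Pauli basis
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lre_qubit(exp_x, exp_y, exp_z):
    """Least-squares (here, plug-in) estimate of a qubit state from measured
    Pauli expectations: rho = (I + r . sigma) / 2 with r the Bloch vector."""
    r = np.array([exp_x, exp_y, exp_z])
    if np.linalg.norm(r) > 1.0:            # project back onto the Bloch ball
        r = r / np.linalg.norm(r)
    return 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)

rho = lre_qubit(0.1, -0.2, 0.9)            # expectations from measured frequencies
```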

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher

    Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate the optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps, including data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Different from previous fixed-threshold methods with over- and under-estimation issues, in our method the optimal thresholds are estimated based on cluster size and the overall nightlight magnitude in the cluster, and they vary with clusters. Two large countries, the United States and China, with different urbanization patterns were selected for mapping urban extents using the proposed method. The result indicates that urbanized area occupies about 2% of total land area in the US, ranging from lower than 0.5% to higher than 10% at the state level, and less than 1% in China, ranging from lower than 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. It was found that our method can map urban areas in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, our method shows its potential to map global urban extents and temporal dynamics using the DMSP/OLS NTL data in a timely, cost-effective way.

  11. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a tall, narrow Gaussian distribution.
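
    A one-dimensional analogue of the PDF-shaping criterion (the paper's index is two-dimensional, over time and space) is the quadratic deviation between a kernel density estimate of the residuals and a narrow Gaussian target; the target width here is an illustrative choice, and in the paper this index is minimized over the WNN parameters by gradient descent:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def pdf_shaping_loss(errors, grid, target_std=0.1):
    """Quadratic deviation between the KDE of modeling errors and a
    tall, narrow zero-mean Gaussian target, evaluated on a fixed grid."""
    p_err = gaussian_kde(errors)(grid)                 # data-driven KDE
    p_tgt = norm.pdf(grid, loc=0.0, scale=target_std)  # target PDF
    return np.trapz((p_err - p_tgt) ** 2, grid)

grid = np.linspace(-1.0, 1.0, 401)
loss = pdf_shaping_loss(np.random.default_rng(0).normal(0.0, 0.3, 500), grid)
```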

  12. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a tall, narrow Gaussian distribution.

  13. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a tall, narrow Gaussian distribution.

  14. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, a genetic algorithm, and a particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

  15. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.

  16. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    PubMed

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.

  17. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse but also piecewise smooth, with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring problem simplifies to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by numerical methods based on the alternating direction method of multipliers (ADMM). Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.

  18. Estimation of nitrate and nitrogen forms of vegetables by UV-spectrophotometry after photo-oxydation.

    PubMed

    Ribeiro, T; Depres, S; Couteau, G; Pauss, A

    2003-01-01

    An alternative method for the estimation of nitrate and nitrogen forms in vegetables is proposed. Nitrate can be directly estimated by UV spectrophotometry after an extraction step with water. The other nitrogen compounds are photo-oxidized into nitrate and then estimated by UV spectrophotometry. An oxidative solution of sodium persulfate and a Hg-UV lamp are used. Preliminary assays were performed with vegetables such as lettuce, spinach, artichokes, green peas, broccoli, carrots and watercress; acceptable correlations between expected and experimental values of nitrate amounts were obtained, although the detection limit needs to be lowered. Optimization of the method is underway.

  19. Three-dimensional electrical impedance tomography: a topology optimization approach.

    PubMed

    Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli

    2008-02-01

    Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.

  20. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or the internal coefficient has a negative influence on the sampling level; the changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  1. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-10-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  3. Estimating soil hydraulic parameters from transient flow experiments in a centrifuge using parameter optimization technique

    USGS Publications Warehouse

    Šimůnek, Jirka; Nimmo, John R.

    2005-01-01

    A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time‐variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using equilibrium analysis and steady state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.

  4. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
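
    A minimal sketch of the estimation pipeline for sparse targets under missing-completely-at-random data: compute a generalized sample covariance from pairwise-complete observations, then regularize (hard thresholding here). The thresholding rule and tuning constant are standard choices for illustration, not the exact estimator of the paper.

      import numpy as np

      def pairwise_covariance(X):
          """n x p data with np.nan for missing entries (MCAR assumed)."""
          obs = ~np.isnan(X)
          mu = np.nanmean(X, axis=0)
          Xc = np.where(obs, X - mu, 0.0)          # centered, zero where missing
          counts = obs.T.astype(float) @ obs.astype(float)
          return (Xc.T @ Xc) / np.maximum(counts - 1.0, 1.0)

      def hard_threshold(S, lam):
          """Zero out small off-diagonal entries for a sparse estimate."""
          T = np.where(np.abs(S) >= lam, S, 0.0)
          np.fill_diagonal(T, np.diag(S))
          return T

      rng = np.random.default_rng(0)
      X = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)
      X[rng.random(X.shape) < 0.2] = np.nan        # 20% missing completely at random
      print(hard_threshold(pairwise_covariance(X), lam=0.15))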

  6. Number of discernible object colors is a conundrum.

    PubMed

    Masaoka, Kenichiro; Berns, Roy S; Fairchild, Mark D; Moghareh Abed, Farhad

    2013-02-01

    Widely varying estimates of the number of discernible object colors have been made by using various methods over the past 100 years. To clarify the source of the discrepancies in the previous, inconsistent estimates, the number of discernible object colors is estimated over a wide range of color temperatures and illuminance levels using several chromatic adaptation models, color spaces, and color difference limens. Efficient and accurate models are used to compute optimal-color solids and count the number of discernible colors. A comprehensive simulation reveals limitations in the ability of current color appearance models to estimate the number of discernible colors even if the color solid is smaller than the optimal-color solid. The estimates depend on the color appearance model, color space, and color difference limen used. The fundamental problem lies in the von Kries-type chromatic adaptation transforms, which have an unknown effect on the ranking of the number of discernible colors at different color temperatures.

  7. A mixed methods approach to assess animal vaccination programmes: The case of rabies control in Bamako, Mali.

    PubMed

    Mosimann, Laura; Traoré, Abdallah; Mauti, Stephanie; Léchenne, Monique; Obrist, Brigit; Véron, René; Hattendorf, Jan; Zinsstag, Jakob

    2017-01-01

    In the framework of the research network on integrated control of zoonoses in Africa (ICONZ) a dog rabies mass vaccination campaign was carried out in two communes of Bamako (Mali) in September 2014. A mixed method approach, combining quantitative and qualitative tools, was developed to evaluate the effectiveness of the intervention towards optimization for future scale-up. Actions to control rabies occur on one level in households when individuals take the decision to vaccinate their dogs. However, control also depends on provision of vaccination services and community participation at the intermediate level of social resilience. Mixed methods seem necessary as the problem-driven transdisciplinary project includes epidemiological components in addition to social dynamics and cultural, political and institutional issues. Adapting earlier effectiveness models for health intervention to rabies control, we propose a mixed method assessment of individual effectiveness parameters like availability, affordability, accessibility, adequacy or acceptability. Triangulation of quantitative methods (household survey, empirical coverage estimation and spatial analysis) with qualitative findings (participant observation, focus group discussions) facilitate a better understanding of the weight of each effectiveness determinant, and the underlying reasons embedded in the local understandings, cultural practices, and social and political realities of the setting. Using this method, a final effectiveness of 33% for commune Five and 28% for commune Six was estimated, with vaccination coverage of 27% and 20%, respectively. Availability was identified as the most sensitive effectiveness parameter, attributed to lack of information about the campaign. We propose a mixed methods approach to optimize intervention design, using an "intervention effectiveness optimization cycle" with the aim of maximizing effectiveness. Empirical vaccination coverage estimation is compared to the effectiveness model with its determinants. In addition, qualitative data provide an explanatory framework for deeper insight, validation and interpretation of results which should improve the intervention design while involving all stakeholders and increasing community participation. This work contributes vital information for the optimization and scale-up of future vaccination campaigns in Bamako, Mali. The proposed mixed method, although incompletely applied in this case study, should be applicable to similar rabies interventions targeting elimination in other settings. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
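
    As a concrete reference point for the classical adaptive algorithms listed above, the sketch below computes the Capon (minimum variance distortionless response) spatial spectrum for a linear equidistant array. The array size, diagonal loading level, and source parameters are illustrative assumptions.

      import numpy as np

      def capon_spectrum(snapshots, angles, d=0.5):
          """snapshots: n_sensors x n_snapshots complex array; d in wavelengths."""
          n = snapshots.shape[0]
          R = snapshots @ snapshots.conj().T / snapshots.shape[1]
          R += 1e-3 * np.trace(R).real / n * np.eye(n)     # diagonal loading
          Rinv = np.linalg.inv(R)
          k = np.arange(n)
          P = []
          for th in angles:
              a = np.exp(2j * np.pi * d * k * np.sin(th))  # steering vector
              P.append(1.0 / np.real(a.conj() @ Rinv @ a))
          return np.array(P)

      rng = np.random.default_rng(1)
      n, snaps = 16, 500
      k = np.arange(n)
      a0 = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(20.0)))
      s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
      x = a0[:, None] * s + 0.3 * (rng.standard_normal((n, snaps))
                                   + 1j * rng.standard_normal((n, snaps)))
      angles = np.deg2rad(np.linspace(-90.0, 90.0, 361))
      P = capon_spectrum(x, angles)
      print("spectrum peak near", np.rad2deg(angles[int(np.argmax(P))]), "degrees")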

  9. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  11. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimation error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
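
    A minimal sketch of the selection loop at the heart of such a design: greedily add the well whose scalar Kalman covariance update most reduces the average estimation error variance over the aquifer. The exponential covariance model, candidate grid, and noise variance are illustrative assumptions, and the paper's geostatistical and heuristic components are not reproduced.

      import numpy as np

      rng = np.random.default_rng(13)
      pts = rng.uniform(0.0, 10.0, (140, 2))        # candidate well positions (km)
      dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
      P = np.exp(-dist / 3.0)                       # prior field covariance
      r = 0.05                                      # measurement noise variance

      chosen = []
      for _ in range(10):                           # design a 10-well network
          scores = np.full(len(pts), np.inf)
          for j in range(len(pts)):
              if j in chosen:
                  continue
              gain = P[:, j] / (P[j, j] + r)
              scores[j] = np.trace(P - np.outer(gain, P[j, :]))
          j_best = int(np.argmin(scores))
          gain = P[:, j_best] / (P[j_best, j_best] + r)
          P = P - np.outer(gain, P[j_best, :])      # scalar Kalman covariance update
          chosen.append(j_best)
      print("selected wells:", chosen)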

  12. A deterministic method for estimating free energy genetic network landscapes with applications to cell commitment and reprogramming paths.

    PubMed

    Olariu, Victor; Manesso, Erica; Peterson, Carsten

    2017-06-01

    Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
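
    Once a free energy estimate F is available on a discretized state grid, the optimal-route step can be illustrated with an ordinary shortest path search. The sketch below runs Dijkstra's algorithm on a synthetic two-well landscape; the edge cost (uphill energy differences) is one plausible choice, not necessarily the authors'.

      import heapq
      import numpy as np

      def optimal_path(F, start, goal):
          """F: 2D array of free energies; start, goal: (i, j) grid cells."""
          ni, nj = F.shape
          dist = {start: 0.0}
          prev = {}
          heap = [(0.0, start)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == goal:
                  break
              if d > dist.get(u, np.inf):
                  continue
              i, j = u
              for v in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                  if 0 <= v[0] < ni and 0 <= v[1] < nj:
                      step = max(F[v] - F[u], 0.0) + 1e-6  # uphill moves cost energy
                      if d + step < dist.get(v, np.inf):
                          dist[v] = d + step
                          prev[v] = u
                          heapq.heappush(heap, (dist[v], v))
          path, u = [goal], goal
          while u != start:                     # walk back from goal to start
              u = prev[u]
              path.append(u)
          return path[::-1]

      x = np.linspace(-2.0, 2.0, 40)
      F = (x[:, None]**2 - 1.0)**2 + (x[None, :]**2 - 1.0)**2  # two-well toy surface
      route = optimal_path(F, (5, 5), (34, 34))
      print(len(route), "steps; first cells:", route[:4])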

  13. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although this usually achieves high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and a performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933

  14. Pseudorange Measurement Method Based on AIS Signals.

    PubMed

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-05-22

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.
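
    The fusion step can be illustrated with the generic minimum mean square error rule for combining two unbiased estimates of the same quantity: the inverse-variance weighted mean. This stands in for the paper's α difference filter, whose exact form is not reproduced; all numbers below are placeholders.

      def mmse_fuse(t1, var1, t2, var2):
          """Inverse-variance (MMSE) fusion of two unbiased estimates."""
          w1 = var2 / (var1 + var2)
          fused = w1 * t1 + (1.0 - w1) * t2
          fused_var = var1 * var2 / (var1 + var2)
          return fused, fused_var

      # Zero-crossing vs. differential-peak detections of the same bit-0-phase;
      # timestamps in microseconds, variances assumed known from SNR calibration.
      t, v = mmse_fuse(104.32, 0.40, 104.51, 0.90)
      print(f"fused timestamp {t:.3f} us, variance {v:.3f}")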

  17. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
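
    For reference, the sketch below shows the baseline being criticized: Tikhonov regularization of a discrete ill-posed problem via a weighted least squares penalty, solved through the normal equations. The forward operator and noise level are illustrative; the stabilized finite element alternative is not reproduced here.

      import numpy as np

      def tikhonov(A, b, alpha):
          """Minimize ||A x - b||^2 + alpha ||x||^2 via the normal equations."""
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

      rng = np.random.default_rng(2)
      s = np.linspace(0.0, 1.0, 60)
      A = np.exp(-(s[:, None] - s[None, :])**2 / 0.005)  # smoothing (ill-posed) operator
      x_true = np.sin(2.0 * np.pi * s)
      b = A @ x_true + 0.01 * rng.standard_normal(60)
      for alpha in (1e-8, 1e-4, 1e-1):
          err = np.linalg.norm(tikhonov(A, b, alpha) - x_true)
          print(f"alpha={alpha:.0e}  reconstruction error={err:.2f}")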

  18. A comprehensive method for preliminary design optimization of axial gas turbine stages

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1982-01-01

    A method is presented that performs a rapid, reasonably accurate preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; (3) predictions of expected turbine performance. The method uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with four existing single stage turbines.

  19. Development of Numerical Methods to Estimate the Ohmic Breakdown Scenarios of a Tokamak

    NASA Astrophysics Data System (ADS)

    Yoo, Min-Gu; Kim, Jayhyun; An, Younghwa; Hwang, Yong-Seok; Shim, Seung Bo; Lee, Hae June; Na, Yong-Su

    2011-10-01

    Ohmic breakdown is a fundamental method for initiating the plasma in a tokamak. For a robust breakdown, ohmic breakdown scenarios must be carefully designed by optimizing the magnetic field configurations to minimize stray magnetic fields. This research focuses on the development of numerical methods to estimate ohmic breakdown scenarios through precise analysis of the magnetic field configurations. This is essential for robust and optimal breakdown and start-up of fusion devices, especially ITER and beyond, which operate with a low toroidal electric field (ET <= 0.3 V/m). A field-line-following analysis code based on the Townsend avalanche theory and a particle simulation code are developed to analyze the breakdown characteristics of actual complex magnetic field configurations, including the stray magnetic fields in tokamaks. They are applied to the ohmic breakdown scenarios of tokamaks such as KSTAR and VEST and compared with experiments.

  20. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to a single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
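
    The residual bootstrap idea is easy to state for a generic linear model: fit, resample the regression residuals, rebuild synthetic data, and refit. The sketch below uses ordinary least squares for brevity, whereas the DTI setting uses a weighted least squares fit of log-signals; data and sizes are illustrative.

      import numpy as np

      def residual_bootstrap_se(X, y, n_boot=1000, seed=0):
          rng = np.random.default_rng(seed)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # initial fit
          resid = y - X @ beta
          boots = np.empty((n_boot, X.shape[1]))
          for b in range(n_boot):
              y_star = X @ beta + rng.choice(resid, size=len(y), replace=True)
              boots[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
          return beta, boots.std(axis=0, ddof=1)         # bootstrap standard errors

      rng = np.random.default_rng(3)
      X = np.column_stack([np.ones(50), rng.standard_normal(50)])
      y = X @ np.array([1.0, 2.0]) + 0.5 * rng.standard_normal(50)
      beta, se = residual_bootstrap_se(X, y)
      print("estimates:", beta, " bootstrap SEs:", se)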

  1. School Cost Functions: A Meta-Regression Analysis

    ERIC Educational Resources Information Center

    Colegrave, Andrew D.; Giles, Margaret J.

    2008-01-01

    The education cost literature includes econometric studies attempting to determine economies of scale, or estimate an optimal school or district size. Not only do their results differ, but the studies use dissimilar data, techniques, and models. To derive value from these studies requires that the estimates be made comparable. One method to do…

  2. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  3. Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation

    NASA Astrophysics Data System (ADS)

    Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra

    2017-12-01

    Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.

  4. More Zernike modes' open-loop measurement in the sub-aperture of the Shack-Hartmann wavefront sensor.

    PubMed

    Zhu, Zhaoyi; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Cao, Zhaoliang; Hu, Lifa; Xuan, Li

    2016-10-17

    The centroid-based Shack-Hartmann wavefront sensor (SHWFS) treats the sampled wavefronts in the sub-apertures as planes, and the slopes of the sub-wavefronts are used to reconstruct the whole pupil wavefront. The problem is that the centroid method may fail to sense the high-order modes under strong turbulence, decreasing the precision of the whole pupil wavefront reconstruction. To solve this problem, we propose a sub-wavefront estimation method for SHWFS based on the focal plane sensing technique, by which more Zernike modes than the two slopes can be sensed in each sub-aperture. In this paper, the effects of the related parameters on the sub-wavefront estimation method, such as the spot size, the phase offset with its set amplitude, and the pixel number in each sub-aperture, are analyzed, and these parameters are optimized to achieve high efficiency. After the optimization, open-loop measurement is realized. For the sub-wavefront sensing, we achieve a large linearity range of 3.0 rad RMS for Zernike modes Z2 and Z3, and 2.0 rad RMS for Zernike modes Z4 to Z6 when the pixel number does not exceed 8 × 8 in each sub-aperture. The whole pupil wavefront reconstruction with the modified SHWFS is realized to analyze the improvements brought by the optimized sub-wavefront estimation method. Sixty-five Zernike modes can be reconstructed with a modified SHWFS containing only 7 × 7 sub-apertures, which could reconstruct only 35 modes by the centroid method, and the mean RMS errors of the residual phases are less than 0.2 rad^2, which is lower than the 0.35 rad^2 of the centroid method.

  5. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
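
    The adaptively weighted ensemble step can be sketched as a constrained optimization: choose nonnegative weights summing to one that maximize the held-out log score of the mixture forecast. The probability matrix below is a random placeholder for the component forecasters' held-out probabilities of the observed events.

      import numpy as np
      from scipy.optimize import minimize

      def fit_weights(P):
          """P: n_events x n_methods probabilities each component forecaster
          assigned to the outcome that actually occurred (held-out data)."""
          m = P.shape[1]
          neg_log_score = lambda w: -np.sum(np.log(P @ w + 1e-12))
          res = minimize(neg_log_score, np.full(m, 1.0 / m),
                         bounds=[(0.0, 1.0)] * m,
                         constraints=({'type': 'eq',
                                       'fun': lambda w: np.sum(w) - 1.0},))
          return res.x

      rng = np.random.default_rng(4)
      P = rng.uniform(0.05, 0.9, size=(200, 3))   # placeholder component scores
      print("ensemble weights:", np.round(fit_weights(P), 3))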

  6. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
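
    A minimal sketch of one local linear approximation (LLA) step for SCAD-penalized regression: compute penalty-derivative weights at the current estimate and solve the resulting weighted lasso, implemented here by column rescaling around scikit-learn's Lasso. This is a simplified stand-in with illustrative tuning values, not the paper's code.

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      def scad_deriv(t, lam, a=3.7):
          """Derivative of the SCAD penalty, used as adaptive weights."""
          t = np.abs(t)
          return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

      def lla_step(X, y, beta0, lam):
          w = np.maximum(scad_deriv(beta0, lam), 1e-4)  # floor keeps scaling stable
          fit = Lasso(alpha=1.0, fit_intercept=False, max_iter=50000).fit(X / w, y)
          return fit.coef_ / w                          # undo the column rescaling

      rng = np.random.default_rng(5)
      n, p = 200, 20
      X = rng.standard_normal((n, p))
      beta_true = np.zeros(p)
      beta_true[:3] = [3.0, -2.0, 1.5]
      y = X @ beta_true + rng.standard_normal(n)
      beta0 = LinearRegression(fit_intercept=False).fit(X, y).coef_  # initial value
      beta1 = lla_step(X, y, beta0, lam=0.1)
      print("selected coefficients:", np.flatnonzero(np.abs(beta1) > 1e-6))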

  8. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, a temporal mean over a large number of samples is used in place of the ensemble mean. However, in many situations the samples of data are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows the MOST. An optimized estimate is obtained when the results yield a minimum of a cost function defined as a weighted sum of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were measured repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantify the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
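
    The optimization idea can be sketched with a neutral logarithmic wind profile standing in for the full stability-dependent MOST profile: find the surface layer parameters that minimize the error-variance-weighted misfit to profile samples at several altitudes. Parameter values and per-level standard deviations below are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      KAPPA = 0.4  # von Karman constant

      def log_profile(z, u_star, z0):
          """Neutral surface layer wind profile u(z) = (u*/kappa) ln(z/z0)."""
          return (u_star / KAPPA) * np.log(z / z0)

      def fit_surface_layer(z, u_obs, sigma):
          """Weighted fit; sigma holds per-level sample standard deviations."""
          resid = lambda th: (log_profile(z, th[0], th[1]) - u_obs) / sigma
          sol = least_squares(resid, x0=[0.3, 0.05],
                              bounds=([1e-3, 1e-4], [2.0, 1.0]))
          return sol.x

      rng = np.random.default_rng(6)
      z = np.array([2.0, 5.0, 10.0, 20.0, 50.0])          # measurement heights (m)
      u = log_profile(z, 0.35, 0.02) + 0.1 * rng.standard_normal(5)
      u_star, z0 = fit_surface_layer(z, u, sigma=np.full(5, 0.1))
      print(f"u* = {u_star:.3f} m/s, z0 = {z0:.4f} m")    # momentum flux ~ -u*^2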

  9. A dynamic programming approach to estimate the capacity value of energy storage

    DOE PAGES

    Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul

    2013-09-17

    Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
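
    The dynamic programming backbone can be sketched by discretizing the state of charge and stepping backward in time over charge/idle/discharge decisions; the resulting dispatch would then be combined with per-period loss-of-load probabilities to build the state-of-charge distribution described above. Prices, efficiency, and sizes are illustrative, and state transitions are rounded to the nearest grid point for brevity.

      import numpy as np

      prices = 30.0 + 20.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 48))  # $/MWh
      soc = np.linspace(0.0, 4.0, 17)          # state-of-charge grid (MWh)
      power, eta = 0.5, 0.85                   # MW per period, charging efficiency

      V = np.zeros(len(soc))                   # terminal value function
      for t in reversed(range(len(prices))):
          Vt = np.full(len(soc), -np.inf)
          for i, s in enumerate(soc):
              for a in (-power, 0.0, power):   # discharge / idle / charge
                  s_next = s + (a * eta if a > 0.0 else a)
                  if not 0.0 <= s_next <= soc[-1]:
                      continue
                  j = int(np.argmin(np.abs(soc - s_next)))   # nearest grid point
                  Vt[i] = max(Vt[i], -a * prices[t] + V[j])  # pay to charge, earn to discharge
          V = Vt
      print("arbitrage value starting half full: $", round(V[8], 2))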

  10. Optimization of multi-stage dynamic treatment regimes utilizing accumulated data.

    PubMed

    Huang, Xuelin; Choi, Sangbum; Wang, Lu; Thall, Peter F

    2015-11-20

    In medical therapies involving multiple stages, a physician's choice of a subject's treatment at each stage depends on the subject's history of previous treatments and outcomes. The sequence of decisions is known as a dynamic treatment regime or treatment policy. We consider dynamic treatment regimes in settings where each subject's final outcome can be defined as the sum of longitudinally observed values, each corresponding to a stage of the regime. Q-learning, which is a backward induction method, is used to first optimize the last stage treatment then sequentially optimize each previous stage treatment until the first stage treatment is optimized. During this process, model-based expectations of outcomes of late stages are used in the optimization of earlier stages. When the outcome models are misspecified, bias can accumulate from stage to stage and become severe, especially when the number of treatment stages is large. We demonstrate that a modification of standard Q-learning can help reduce the accumulated bias. We provide a computational algorithm, estimators, and closed-form variance formulas. Simulation studies show that the modified Q-learning method has a higher probability of identifying the optimal treatment regime even in settings with misspecified models for outcomes. It is applied to identify optimal treatment regimes in a study for advanced prostate cancer and to estimate and compare the final mean rewards of all the possible discrete two-stage treatment sequences. Copyright © 2015 John Wiley & Sons, Ltd.
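
    A minimal sketch of Q-learning by backward induction for a two-stage regime with binary treatments: fit an outcome model at stage 2, form a pseudo-outcome by adding the optimal stage-2 value to the stage-1 reward, and fit stage 1 against it. The linear models and simulated data are illustrative; the paper's bias-reducing modification is not reproduced.

      import numpy as np

      def fit_linear(X, y):
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef

      def design(x, a):
          """Features: intercept, covariate, treatment, interaction."""
          return np.column_stack([np.ones_like(x), x, a, x * a])

      rng = np.random.default_rng(7)
      n = 2000
      x1 = rng.standard_normal(n)
      a1 = rng.integers(0, 2, n).astype(float)
      y1 = 0.5 * x1 + a1 * (1.0 - x1) + rng.standard_normal(n)   # stage-1 reward
      x2 = x1 + 0.3 * a1 + 0.2 * rng.standard_normal(n)          # evolving covariate
      a2 = rng.integers(0, 2, n).astype(float)
      y2 = 0.2 * x2 + a2 * (x2 - 0.5) + rng.standard_normal(n)   # stage-2 reward

      # Stage 2: fit the outcome model, then take the best action per subject.
      c2 = fit_linear(design(x2, a2), y2)
      q2 = np.maximum(design(x2, np.zeros(n)) @ c2, design(x2, np.ones(n)) @ c2)

      # Stage 1: pseudo-outcome = stage-1 reward + optimal future value.
      c1 = fit_linear(design(x1, a1), y1 + q2)
      treat1 = design(x1, np.ones(n)) @ c1 > design(x1, np.zeros(n)) @ c1
      print("stage-1 treat fraction:", treat1.mean())   # true rule: treat if x1 < 1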

  11. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed by expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  12. Unbiased contaminant removal for 3D galaxy power spectrum measurements

    NASA Astrophysics Data System (ADS)

    Kalus, B.; Percival, W. J.; Bacon, D. J.; Samushia, L.

    2016-11-01

    We assess and develop techniques to remove contaminants when calculating the 3D galaxy power spectrum. We separate the process into three separate stages: (I) removing the contaminant signal, (II) estimating the uncontaminated cosmological power spectrum and (III) debiasing the resulting estimates. For (I), we show that removing the best-fitting contaminant (mode subtraction) and setting the contaminated components of the covariance to be infinite (mode deprojection) are mathematically equivalent. For (II), performing a quadratic maximum likelihood (QML) estimate after mode deprojection gives an optimal unbiased solution, although it requires the manipulation of large N_modes x N_modes matrices (N_modes being the total number of modes), which is unfeasible for recent 3D galaxy surveys. Measuring a binned average of the modes for (II) as proposed by Feldman, Kaiser & Peacock (FKP) is faster and simpler, but is sub-optimal and gives rise to a biased solution. We present a method to debias the resulting FKP measurements that does not require any large matrix calculations. We argue that the sub-optimality of the FKP estimator compared with the QML estimator, caused by contaminants, is less severe than that commonly ignored due to the survey window.
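
    The stage (I) equivalence is easy to see in the simplest case of a single known contaminant template: subtracting the best-fitting amplitude of the template (mode subtraction) leaves data orthogonal to it, the same result as assigning that mode infinite variance (mode deprojection). The vectors and amplitudes below are generic placeholders.

      import numpy as np

      rng = np.random.default_rng(12)
      n = 512
      f = np.sin(2.0 * np.pi * np.arange(n) / 64.0)   # known contaminant template
      d = rng.standard_normal(n) + 3.0 * f            # contaminated data vector

      eps = (f @ d) / (f @ f)                         # best-fitting amplitude
      d_clean = d - eps * f                           # mode-subtracted data
      print("residual overlap with template:", abs(f @ d_clean))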

  13. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

    The GWOLR model represents the relationship between a dependent variable measured on an ordinal categorical scale and independent variables whose influence depends on the geographical location of the observation site. Parameter estimation for the GWOLR model by maximum likelihood yields a system of nonlinear equations whose solution is hard to obtain analytically. Solving it amounts to finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is therefore solved by numerical approximation, here the Newton-Raphson method. The purpose of this research is to construct the Newton-Raphson iteration algorithm and a program in R software to estimate the GWOLR model. The research shows that a program in R, built around the "while" command, can be used to estimate the parameters of the GWOLR model.
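
    The Newton-Raphson iteration at the core of such a program can be sketched for plain binary logistic regression, a simplified stand-in for the full geographically weighted ordinal model (which would additionally apply spatial kernel weights at each location): beta <- beta + (X'WX)^(-1) X'(y - p).

      import numpy as np

      def newton_logistic(X, y, tol=1e-8, max_iter=50):
          beta = np.zeros(X.shape[1])
          for _ in range(max_iter):
              p = 1.0 / (1.0 + np.exp(-X @ beta))
              grad = X.T @ (y - p)                          # score vector
              hess = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information
              step = np.linalg.solve(hess, grad)
              beta += step
              if np.max(np.abs(step)) < tol:    # the "while"-style stopping rule
                  break
          return beta

      rng = np.random.default_rng(8)
      X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
      true_beta = np.array([-0.5, 1.0, 2.0])
      y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
      print(newton_logistic(X, y))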

  14. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness of fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.

  15. Estimating genome-wide regulatory activity from multi-omics data sets using mathematical optimization.

    PubMed

    Trescher, Saskia; Münchmeyer, Jannes; Leser, Ulf

    2017-03-27

    Gene regulation is one of the most important cellular processes, indispensable for the adaptability of organisms and closely interlinked with several classes of pathogenesis and their progression. Elucidation of regulatory mechanisms can be approached by a multitude of experimental methods, yet integration of the resulting heterogeneous, large, and noisy data sets into comprehensive and tissue or disease-specific cellular models requires rigorous computational methods. Recently, several algorithms have been proposed which model genome-wide gene regulation as sets of (linear) equations over the activity and relationships of transcription factors, genes and other factors. Subsequent optimization finds those parameters that minimize the divergence of predicted and measured expression intensities. In various settings, these methods produced promising results in terms of estimating transcription factor activity and identifying key biomarkers for specific phenotypes. However, despite their common root in mathematical optimization, they vastly differ in the types of experimental data being integrated, the background knowledge necessary for their application, the granularity of their regulatory model, the concrete paradigm used for solving the optimization problem and the data sets used for evaluation. Here, we review five recent methods of this class in detail and compare them with respect to several key properties. Furthermore, we quantitatively compare the results of four of the presented methods based on publicly available data sets. The results show that all methods seem to find biologically relevant information. However, we also observe that the mutual result overlaps are very low, which contradicts biological intuition. Our aim is to raise further awareness of the power of these methods, yet also to identify common shortcomings and necessary extensions enabling focused research on the critical points.
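
    The shared core of these methods can be sketched as a linear model of expression in terms of hidden transcription factor activities acting through a known connectivity matrix, estimated by (here ridge-regularized) least squares. Matrix names, sizes, and the regularization choice are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(9)
      n_genes, n_tfs, n_samples = 300, 20, 12
      C = (rng.random((n_genes, n_tfs)) < 0.1).astype(float)  # known connectivity
      A_true = rng.standard_normal((n_tfs, n_samples))        # hidden TF activities
      E = C @ A_true + 0.1 * rng.standard_normal((n_genes, n_samples))

      # Ridge-regularized least squares: argmin_A ||E - C A||^2 + lam ||A||^2
      lam = 1.0
      A_hat = np.linalg.solve(C.T @ C + lam * np.eye(n_tfs), C.T @ E)
      corr = np.corrcoef(A_hat.ravel(), A_true.ravel())[0, 1]
      print(f"correlation with true activities: {corr:.2f}")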

  16. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  17. Estimating weak ratiometric signals in imaging data. II. Meta-analysis with multiple, dual-channel datasets.

    PubMed

    Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D

    2008-09-01

    Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.

  18. Quantifying rainfall-derived inflow and infiltration in sanitary sewer systems based on conductivity monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo

    2018-03-01

    Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method separately decomposes rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. Fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were successfully estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.

  19. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
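    The core of the recommended process can be illustrated in a few lines: draw load and resistance from assumed distributions and count the fraction of draws where load exceeds resistance. The distributions and parameters below are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo estimation of failure probability from a
# load-resistance model; failure occurs when load exceeds resistance.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

resistance = rng.lognormal(mean=np.log(500.0), sigma=0.08, size=n)  # capacity
load = rng.normal(loc=350.0, scale=40.0, size=n)                    # demand

p_f = np.mean(load > resistance)
# Standard error of the Monte Carlo estimate of a probability.
se = np.sqrt(p_f * (1.0 - p_f) / n)
print(f"estimated failure probability: {p_f:.2e} +/- {se:.1e}")
```

    For small failure probabilities the standard error indicates how many samples are needed before the estimate can be trusted.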

  20. Variational Quantum Tomography with Incomplete Information by Means of Semidefinite Programs

    NASA Astrophysics Data System (ADS)

    Maciel, Thiago O.; Cesário, André T.; Vianna, Reinaldo O.

    We introduce a new method to reconstruct unknown quantum states from incomplete and noisy information. The method is a linear convex optimization problem, therefore with a unique minimum, which can be efficiently solved with Semidefinite Programs. Numerical simulations indicate that the estimated state does not overestimate purity, nor the expectation values of optimal entanglement witnesses. The convergence properties of the method are similar to compressed sensing approaches, in the sense that, in order to reconstruct low rank states, it needs just a fraction of the effort corresponding to an informationally complete measurement.
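    A minimal sketch of this kind of convex reconstruction, assuming the cvxpy modeling library: a least-squares fit to (possibly incomplete) expectation values subject to positivity and unit trace. This is a generic semidefinite program in the same spirit, not the authors' exact variational formulation.

```python
# Hedged sketch: SDP state reconstruction from noisy Pauli expectation values.
import numpy as np
import cvxpy as cp

d = 2                                        # single qubit for illustration
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
observables = [X, Y, Z]                      # incomplete/noisy measurement set

true_rho = np.array([[0.85, 0.3], [0.3, 0.15]], dtype=complex)
rng = np.random.default_rng(1)
data = [np.real(np.trace(O @ true_rho)) + rng.normal(0, 0.02)
        for O in observables]

rho = cp.Variable((d, d), hermitian=True)
constraints = [rho >> 0, cp.real(cp.trace(rho)) == 1]   # physical state
residual = cp.sum_squares(
    cp.hstack([cp.real(cp.trace(O @ rho)) - b
               for O, b in zip(observables, data)]))
cp.Problem(cp.Minimize(residual), constraints).solve()
print("reconstructed state:\n", rho.value)
```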

  1. Identification of Swallowing Tasks from a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment

    ERIC Educational Resources Information Center

    Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-01-01

    Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…

  2. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to spike initiation in neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm still easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with dynamic logistic chaotic mapping to adjust the inertia weights according to the fitness values, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the model rebuilt from the estimated parameters shows that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared to the improved PSO to verify that the algorithm proposed in this paper can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
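    The sketch below shows one plausible reading of a chaotically modulated inertia weight inside a standard PSO loop, assuming a logistic map drives the weight; the concrete schedule and constants are illustrative, not the paper's exact formula.

```python
# Hedged sketch: PSO whose inertia weight is modulated by a logistic chaotic map.
import numpy as np

def pso_chaotic(f, bounds, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    z = 0.7                                   # logistic-map state
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)               # chaotic update in (0, 1)
        w = 0.4 + 0.5 * z                     # inertia wanders in [0.4, 0.9]
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Usage: minimize the Rastrigin function, a common multimodal test problem.
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best_x, best_f = pso_chaotic(rastrigin, (np.full(2, -5.12), np.full(2, 5.12)))
print(best_x, best_f)
```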

  3. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 · 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.

  4. Dissipative structure and global existence in critical space for Timoshenko system of memory type

    NASA Astrophysics Data System (ADS)

    Mori, Naofumi

    2018-08-01

    In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. In the first place, we consider the linearized system: applying the energy method in the Fourier space, we derive the pointwise estimate of the solution in the Fourier space, which first gives the optimal decay estimate of the solution. Next, we give a characterization of the dissipative structure of the system by using spectral analysis, which confirms that our pointwise estimate is optimal. In the second place, we consider the nonlinear system: we show that the global-in-time existence and uniqueness result can be proved under a minimal regularity assumption in the critical Sobolev space H^2. In the proof we do not need any time-weighted norms, as recent works do; we use just an energy method, which is improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.

  5. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    PubMed

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neural Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.

  6. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    PubMed Central

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neural Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495

  7. Variational Lagrangian data assimilation in open channel networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingfang; Tinka, Andrew; Weekly, Kevin; Beard, Jonathan; Bayen, Alexandre M.

    2015-04-01

    This article presents a data assimilation method for a tidal system, where data from both Lagrangian drifters and Eulerian flow sensors were fused to estimate water velocity. The system is modeled by first-order, hyperbolic partial differential equations subject to periodic forcing. The estimation problem can then be formulated as the minimization of the difference between the observed variables and the model outputs, eventually providing the velocity and water stage of the hydrodynamic system. The governing equations are linearized and discretized using an implicit discretization scheme, resulting in linear equality constraints in the optimization program. Thus, the flow estimation can be posed as an optimization problem and efficiently solved. The effectiveness of the proposed method was substantiated by a large-scale field experiment in the Sacramento-San Joaquin River Delta in California. A fleet of 100 sensors developed at the University of California, Berkeley, was deployed in Walnut Grove, CA, to collect a set of Lagrangian data: a time series of positions as the sensors moved through the water. Measurements were also taken from Eulerian sensors in the region, provided by the United States Geological Survey. It is shown that the proposed method can effectively integrate Lagrangian and Eulerian measurement data, resulting in a well-suited estimate of the flow variables within the hydraulic system.

  8. The link between judgments of comparative risk and own risk: further evidence.

    PubMed

    Gold, Ron S

    2007-03-01

    Individuals typically believe that they are less likely than the average person to experience negative events, a phenomenon termed "unrealistic optimism". The direct method of assessing unrealistic optimism employs a question of the form, "Compared with the average person, what is the chance that X will occur to you?". However, it has been proposed that responses to such a question (direct-estimates) are based essentially just on estimates that X will occur to the self (self-estimates). If this is so, any factors that affect one of these estimates should also affect the other. This prediction was tested in two experiments. In each, direct- and self-estimates for an unfamiliar health threat - homocysteine-related heart problems - were recorded. It was found that both types of estimate were affected in the same way by varying the stated probability of having unsafe levels of homocysteine (Study 1, N=149) and varying the stated probability that unsafe levels of homocysteine will lead to heart problems (Study 2, N=111). The results are consistent with the proposal that direct-estimates are constructed just from self-estimates.

  9. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    In view of the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is presented. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and close to optimal.
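    For reference, the baseline per-subcarrier LS estimate that such methods build on reduces to one complex division per pilot subcarrier, with no large matrix pseudo-inverse. The sizes and channel model below are illustrative assumptions.

```python
# Hedged sketch: per-subcarrier least-squares channel estimation from pilots.
import numpy as np

rng = np.random.default_rng(0)
n_sc = 64                                        # OFDM subcarriers
pilots = rng.choice([1 + 0j, -1 + 0j], n_sc)     # known pilot symbols

# Frequency response of a short random multipath channel.
h_taps = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)
H_true = np.fft.fft(h_taps, n_sc)

noise = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) * 0.05
Y = H_true * pilots + noise                      # received pilot observations

# LS estimate: one complex division per subcarrier.
H_ls = Y / pilots
print("mean squared error:", np.mean(np.abs(H_ls - H_true) ** 2))
```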

  10. Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.

    PubMed

    Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto

    2011-07-15

    Time-series measurements of metabolite concentration have become increasingly common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding the global optimal solution to the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which a subset of model parameters associated with measured metabolites is estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in getting accurate parameter estimates, even when some information is missing.
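    A minimal sketch of the first (decoupling) phase, under assumed kinetics: slopes are computed once from a spline fit to the measured time course, and parameters are then estimated by minimizing slope errors, with no ODE integration inside the optimization loop. The rate law and all values are illustrative assumptions.

```python
# Hedged sketch: slope-error minimization for one measured metabolite.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

def rate(t, x, vmax, km, kd):
    s = 2.0 * np.exp(-0.3 * t)                   # measured substrate profile
    return vmax * s / (km + s) - kd * x          # toy Michaelis-Menten kinetics

# Generate noisy synthetic observations of the metabolite.
t_obs = np.linspace(0, 10, 40)
sol = solve_ivp(rate, (0, 10), [0.1], args=(1.5, 0.5, 0.4), t_eval=t_obs)
rng = np.random.default_rng(3)
x_obs = sol.y[0] + rng.normal(0, 0.01, t_obs.size)

# Slopes from a spline fit to the measured time course.
spline = CubicSpline(t_obs, x_obs)
slopes = spline(t_obs, 1)                        # first derivative

# Phase 1: minimize slope errors -- no ODE solves in the loop.
def slope_residuals(p):
    vmax, km, kd = p
    return rate(t_obs, x_obs, vmax, km, kd) - slopes

fit = least_squares(slope_residuals, x0=[1.0, 1.0, 1.0], bounds=(0, 10))
print("estimated (vmax, km, kd):", fit.x)
```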

  11. Quadratic semiparametric Von Mises calculus

    PubMed Central

    Robins, James; Li, Lingling; Tchetgen, Eric

    2009-01-01

    We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a slower-than-n^(-1/2) rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^(-1/2) rate. PMID:23087487

  12. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.

  13. Evaluation of subset matching methods and forms of covariate balance.

    PubMed

    de Los Angeles Resa, María; Zubizarreta, José R

    2016-11-30

    This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance, specifically, fine balance, or strength-k balance, plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.

    PubMed

    Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T

    2013-01-01

    Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.

  15. Infrared Retrievals of Ice Cloud Properties and Uncertainties with an Optimal Estimation Retrieval Method

    NASA Astrophysics Data System (ADS)

    Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.

    2014-12-01

    We developed an optimal estimation (OE)-based method using infrared (IR) observations to simultaneously retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH). The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations from different sensors, and the corresponding Jacobians, in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and the pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.
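    A generic optimal estimation retrieval step in the Rodgers formulation can be sketched as a Gauss-Newton iteration that blends a prior state with observations through the forward-model Jacobian. The toy forward model and covariances below are illustrative stand-ins for the fast IR RTM, not the authors' code.

```python
# Hedged sketch: Gauss-Newton optimal estimation (Rodgers-style) retrieval.
import numpy as np

def forward(x):
    """Toy nonlinear forward model mapping state (COT, CER, CTH) to 'radiances'."""
    return np.array([np.exp(-x[0]) + 0.1 * x[1],
                     x[1] / (1.0 + x[0]),
                     0.5 * x[2] + 0.05 * x[0] * x[1]])

def jacobian(x, eps=1e-6):
    """Central finite-difference Jacobian of the forward model."""
    K = np.zeros((3, 3))
    for j in range(3):
        dx = np.zeros(3); dx[j] = eps
        K[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
    return K

x_a = np.array([1.0, 20.0, 8.0])           # prior state (COT, CER um, CTH km)
S_a = np.diag([1.0, 25.0, 4.0])            # prior covariance
S_e = np.diag([1e-4, 1e-4, 1e-4])          # measurement + forward-model covariance
y = forward(np.array([0.8, 18.0, 9.0]))    # synthetic observation

x = x_a.copy()
for _ in range(10):                        # Gauss-Newton iterations
    K = jacobian(x)
    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    x = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - forward(x) + K @ (x - x_a))

print("retrieved state:", x)
print("posterior covariance diagonal:", np.diag(S_hat))
```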

  16. Some comments on Anderson and Pospahala's correction of bias in line transect sampling

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Chain, B.R.

    1980-01-01

    ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.

  17. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.

  18. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and the optimal interpolation (OI) filter are examined for their effectiveness as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.

  19. Time Resolved Temperature Measurement of Hypervelocity Impact Generated Plasma Using a Global Optimization Method

    NASA Astrophysics Data System (ADS)

    Hew, Y. M.; Linscott, I.; Close, S.

    2015-12-01

    Meteoroids and orbital debris, collectively referred to as hypervelocity impactors, travel at between 7 and 72 km/s in free space. Upon impact onto a spacecraft, the conversion of kinetic energy to ionization and vaporization occurs within a very brief timescale and results in a small, dense expanding plasma with a very strong optical flash. The radio frequency (RF) emission produced by this plasma can potentially lead to electrical anomalies within the spacecraft. In addition, space weather, such as solar activity and the background plasma, can establish spacecraft conditions that exacerbate the damage done by these impacts. The impact flash emitted in ground-based hypervelocity impact tests has long been expected to contain the characteristics of the impact-generated plasma, such as its temperature and density; by studying the emission spectrum of the impact, we aim to infer the properties of the impact-generated gas cloud/plasma. This paper presents a method for time-resolved plasma temperature estimation using three-color visible-band photometry data with a global pattern search optimization method. The equilibrium temperature of the plasma can be estimated using an optical model that accounts for both the line emission and continuum emission from the plasma. Using a global pattern-search-based optimizer, the model can isolate the contribution of the continuum emission from that of the line emission, and the plasma temperature can thus be estimated. Prior to the optimization step, a Gaussian process is also applied to extract the optical emission signal from the noisy background. The resultant temperature and line-to-continuum emission weighting factor are consistent with the spectrum of the impactor material and the current literature.

  20. Estimating irrigation water demand using an improved method and optimizing reservoir operation for water supply and hydropower generation: a case study of the Xinfengjiang reservoir in southern China

    USGS Publications Warehouse

    Wu, Yiping; Chen, Ji

    2013-01-01

    The ever-increasing demand for water due to growth of population and socioeconomic development in the past several decades has posed a worldwide threat to water supply security and to the environmental health of rivers. This study aims to derive reservoir operating rules through establishing a multi-objective optimization model for the Xinfengjiang (XFJ) reservoir in the East River Basin in southern China to minimize water supply deficit and maximize hydropower generation. Additionally, to enhance the estimation of irrigation water demand from the downstream agricultural area of the XFJ reservoir, a conventional method for calculating crop water demand is improved using hydrological model simulation results. Although the optimal reservoir operating rules are derived for the XFJ reservoir with three priority scenarios (water supply only, hydropower generation only, and equal priority), river environmental health is set as the basic demand no matter which scenario is adopted. The results show that the new rules derived under the three scenarios can improve the reservoir operation for both water supply and hydropower generation when compared with the historical performance. Moreover, these alternative reservoir operating policies provide the flexibility for the reservoir authority to choose the most appropriate one. Although changing the current operating rules may influence its hydropower-oriented functions, the new rules can be significant for coping with the increasingly prominent water shortage and degradation of the aquatic environment. Overall, our results and methods (improved estimation of irrigation water demand and formulation of the reservoir optimization model) can be useful for local watershed managers and valuable for other researchers worldwide.

  1. Salient object detection based on discriminative boundary and multiple cues integration

    NASA Astrophysics Data System (ADS)

    Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei

    2016-01-01

    In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed through discriminating each boundary via Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through weight-adjustable optimization framework. Then to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues-uniqueness, distribution, and coherence from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most of the methods based on boundary, and the integrated result outperforms all state-of-the-art methods.

  2. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.

  3. Determination of low-frequency normal modes and structure coefficients using optimal sequence stacking method and autoregressive method in frequency domain

    NASA Astrophysics Data System (ADS)

    Majstorovic, J.; Rosat, S.; Lambotte, S.; Rogister, Y. J. G.

    2017-12-01

    Although there are numerous studies of 3D Earth density models, building an accurate one is still an engaging challenge. One procedure to refine global 3D Earth density models is based on unambiguous measurements of the Earth's normal mode eigenfrequencies. To obtain unbiased eigenfrequency measurements one needs to deal with time records of varying quality and especially with different noise sources, while standard approaches usually rely on signal processing methods such as the Fourier transform. Here we present estimates of complex eigenfrequencies and structure coefficients for several modes below 1 mHz (0S2, 2S1, etc.). Our analysis is performed in three steps. The first step uses stacking methods to enhance specific modes of interest above the observed noise level. Of the three methods tried, optimal sequence estimation proved superior to the spherical harmonic stacking method and the receiver strip method. In the second step we apply an autoregressive method in the frequency domain to estimate the complex eigenfrequencies of the target modes. In the third step we apply the phasor walkout method to test and confirm our eigenfrequencies. Before analyzing the time records, we evaluate how the station distribution and noise levels impact the estimates of eigenfrequencies and structure coefficients by using synthetic seismograms calculated for a realistic 3D Earth model, which includes the Earth's ellipticity and lateral heterogeneity. Synthetic seismograms are computed by means of normal mode summation using self-coupling and cross-coupling of modes up to 1 mHz. Finally, the methods tested on synthetic data are applied to long-period seismometer and superconducting gravimeter data recorded after six mega-earthquakes of magnitude greater than 8.3. Hence, we propose new estimates of structure coefficients that depend on the density variations.

  4. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources.
    [Figure captions: contours of the expected information gain, with the optimal observation location at the maximum value; posterior marginal probability densities of the unknown parameters, where thick solid black lines correspond to the designed location, seven other lines to randomly chosen locations, and vertical lines mark the true values, showing that the unknown parameters are estimated better with the designed location.]

  5. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models

    PubMed Central

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005

  6. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    PubMed

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.

  7. Method for hue plane preserving color correction.

    PubMed

    Mackiewicz, Michal; Andersen, Casper F; Finlayson, Graham

    2016-11-01

    Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space, defined by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, where each is defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering an estimation accuracy of higher order methods, it maintains the linear colorimetric relations of colors in hue planes. As a significant result, it therefore also renders the colorimetric estimates invariant to exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC using constrained least squares in the optimization, where the subregions can be chosen freely in number and position in order to optimize the results while constraining transform continuity at the subregion boundaries. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.

  8. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.

  10. Data-driven sensor placement from coherent fluid structures

    NASA Astrophysics Data System (ADS)

    Manohar, Krithika; Kaiser, Eurika; Brunton, Bingni W.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Optimal sensor placement is a central challenge in the prediction, estimation and control of fluid flows. We reinterpret sensor placement as optimizing discrete samples of coherent fluid structures for full state reconstruction. This permits a drastic reduction in the number of sensors required for faithful reconstruction, since complex fluid interactions can often be described by a small number of coherent structures. Our work optimizes point sensors using the pivoted matrix QR factorization to sample coherent structures directly computed from flow data. We apply this sampling technique in conjunction with various data-driven modal identification methods, including the proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD). In contrast to POD-based sensors, DMD demonstrably enables the optimization of sensors for prediction in systems exhibiting multiple scales of dynamics. Finally, reconstruction accuracy from pivot sensors is shown to be competitive with sensors obtained using traditional computationally prohibitive optimization methods.
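    A minimal sketch of the sampling strategy, assuming POD modes as the coherent structures: pivoted QR on the transposed mode matrix ranks grid points, and the leading pivots become the sensor locations. The snapshot data here is a toy low-rank field, not flow data.

```python
# Hedged sketch: sensor placement by pivoted QR on the leading POD modes.
import numpy as np
from scipy.linalg import qr, svd

rng = np.random.default_rng(7)
n, m, r = 500, 200, 10                     # grid points, snapshots, modes kept

# Toy snapshot matrix with low-rank structure plus noise.
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, m)) \
    + 0.01 * rng.normal(size=(n, m))

U, s, Vt = svd(X, full_matrices=False)
Psi = U[:, :r]                             # POD modes (coherent structures)

# Pivoted QR on Psi^T ranks rows (grid points) by how well they condition
# the reconstruction; the first r pivots become the sensor locations.
_, _, pivots = qr(Psi.T, pivoting=True)
sensors = pivots[:r]

# Reconstruct a full snapshot from its r sensor values alone.
x = X[:, 0]
a = np.linalg.solve(Psi[sensors, :], x[sensors])   # modal amplitudes
x_hat = Psi @ a
print("sensors:", sensors)
print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```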

  11. Nonlinear Symplectic Attitude Estimation for Small Satellites

    DTIC Science & Technology

    2006-08-01

    [Record text garbled in extraction; only fragments survive. The references cite Gelb, A. (ed.), Applied Optimal Estimation, The M.I.T. Press, Cambridge, MA, 1974, and Brown, R. G. and Hwang, P. Y. The recoverable abstract text reports orders-of-magnitude improvement in the estimation of states and constants of motion compared with extended and iterative Kalman methods, and notes that most small-satellite approaches have fallen into the former category, including the ubiquitous Extended Kalman Filter (EKF).]

  12. Multifractal diffusion entropy analysis: Optimal bin width of probability histograms

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Korbel, Jan

    2014-11-01

    In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin-width in histograms generated from underlying probability distributions of interest. The method presented uses techniques of Rényi’s entropy and the mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on a scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate in the time period 1950-2013. In order to demonstrate a strength of the method proposed we compare the multifractal δ-spectrum for various bin-widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. Connection between the δ-spectrum and Rényi’s q parameter is also discussed and elucidated on a simple example of multiscale time series.
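    The general idea of data-driven bin-width selection can be sketched with a standard least-squares cross-validation risk scanned over candidate widths; note this is a stand-in criterion for illustration, not the Rényi-entropy/MSE analysis developed in the paper.

```python
# Hedged sketch: histogram bin-width selection by cross-validation risk.
import numpy as np

def cv_risk(data, h):
    """Unbiased least-squares cross-validation risk for bin width h."""
    n = data.size
    edges = np.arange(data.min(), data.max() + h, h)
    counts, _ = np.histogram(data, bins=edges)
    p = counts / n
    return 2.0 / ((n - 1) * h) - (n + 1) / ((n - 1) * h) * np.sum(p ** 2)

rng = np.random.default_rng(11)
data = np.concatenate([rng.normal(-2, 0.5, 2000), rng.normal(1, 1.0, 3000)])

widths = np.linspace(0.05, 1.0, 60)
risks = [cv_risk(data, h) for h in widths]
print("optimal bin width:", widths[int(np.argmin(risks))])
```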

  13. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
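    The surrogate-modeling idea can be sketched as follows: tabulate the expensive static analysis offline, fit a smooth interpolant, and let the optimizer query only the surrogate. The stand-in analysis function and the radial-basis surrogate below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: smooth surrogate of an expensive static analysis for use
# inside a gradient-based optimizer.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_contact_force(q):
    """Toy stand-in for a non-smooth static contact analysis."""
    return np.sum(np.maximum(q, 0.0) ** 1.5, axis=-1)

# Offline: sample the input space and tabulate the analysis once.
rng = np.random.default_rng(5)
samples = rng.uniform(-1, 1, (500, 3))          # e.g., joint coordinates
values = expensive_contact_force(samples)
surrogate = RBFInterpolator(samples, values, smoothing=1e-6)

# Online: the optimizer queries only the cheap, smooth surrogate.
objective = lambda q: float(surrogate(q[None, :])[0]) + 0.1 * np.sum(q ** 2)
result = minimize(objective, x0=np.zeros(3), method="BFGS")
print("optimum:", result.x)
```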

  14. Study on optimization method of test conditions for fatigue crack detection using lock-in vibrothermography

    NASA Astrophysics Data System (ADS)

    Min, Qing-xu; Zhu, Jun-zhen; Feng, Fu-zhou; Xu, Chao; Sun, Ji-wei

    2017-06-01

    In this paper, the lock-in vibrothermography (LVT) is utilized for defect detection. Specifically, for a metal plate with an artificial fatigue crack, the temperature rise of the defective area is used for analyzing the influence of different test conditions, i.e. engagement force, excitation intensity, and modulated frequency. The multivariate nonlinear and logistic regression models are employed to estimate the POD (probability of detection) and POA (probability of alarm) of fatigue crack, respectively. The resulting optimal selection of test conditions is presented. The study aims to provide an optimized selection method of the test conditions in the vibrothermography system with the enhanced detection ability.

  15. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations.

    PubMed

    Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike

    2017-01-01

    During the recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions associated with the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, in order to develop a new tool for multimodal optimization which does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance to the center of fitter subpopulations and the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called the covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of robust mean peak ratio. Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.

  16. Multiple-Objective Optimal Designs for Studying the Dose Response Function and Interesting Dose Levels

    PubMed Central

    Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557

  18. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  19. The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian

    This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
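
    The population-of-estimates idea can be illustrated with ordinary Slepian (DPSS) tapers used as narrow-band filter shapes, a simplified stand-in for the report's Slepian wavelets (Python sketch; the centre frequency, bandwidth, and taper count are arbitrary assumptions):

      import numpy as np
      from scipy.signal import hilbert
      from scipy.signal.windows import dpss

      def group_velocity_estimates(x, fs, f0, half_bw, r, n_tapers=5):
          """Population of group-velocity estimates at centre frequency f0:
          the record is filtered with several DPSS-shaped passbands and the
          envelope-peak arrival time gives U = r / t_peak for each taper.
          Assumes the band lies inside (0, fs/2) and t_peak > 0."""
          n = len(x)
          freqs = np.fft.rfftfreq(n, 1.0 / fs)
          X = np.fft.rfft(x)
          m = int(2 * half_bw / (fs / n))           # passband width in bins
          tapers = dpss(m, NW=2.5, Kmax=n_tapers)   # one spectral shape per taper
          k0 = np.argmin(np.abs(freqs - f0)) - m // 2
          estimates = []
          for tap in tapers:
              H = np.zeros_like(freqs)
              H[k0:k0 + m] = np.abs(tap)            # DPSS magnitude as band-pass
              y = np.fft.irfft(X * H, n)
              env = np.abs(hilbert(y))              # narrow-band envelope
              t_peak = np.argmax(env) / fs
              estimates.append(r / t_peak)
          return np.mean(estimates), np.std(estimates, ddof=1)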

  20. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Loose fusion based on SLAM and IMU for indoor environment

    NASA Astrophysics Data System (ADS)

    Zhu, Haijiang; Wang, Zhicheng; Zhou, Jinglin; Wang, Xuejing

    2018-04-01

    The simultaneous localization and mapping (SLAM) method based on the RGB-D sensor has been widely researched in recent years. However, the accuracy of RGB-D SLAM relies heavily on corresponding feature points, and the position can be lost in scenes with sparse textures. Therefore, many fusion methods using RGB-D information and inertial measurement unit (IMU) data have been investigated to improve the accuracy of SLAM systems. However, these fusion methods usually do not take into account the number of matched feature points: the pose estimated from RGB-D information may not be accurate when the number of correct matches is too small. Thus, considering the impact of matches on the SLAM system and the problem of losing position in scenes with few textures, a loose fusion method combining RGB-D with IMU is proposed in this paper. We design a loose fusion strategy based on the RGB-D camera information and the IMU data: the IMU data are used for position estimation when the corresponding point matches are too few, while the RGB-D information is still used to estimate position when there are enough matches. The final pose is then optimized in the General Graph Optimization (g2o) framework to reduce error. The experimental results show that the proposed method outperforms the RGB-D-only method, and that it works stably in the SLAM system for indoor environments with sparse textures.
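
    A sketch of the switching logic (the threshold and both pose estimators below are hypothetical placeholders, not the paper's implementation; in the paper the visual estimate comes from feature matching and the result is refined in g2o):

      import numpy as np

      MIN_MATCHES = 30  # assumed threshold; the paper does not publish its value

      def estimate_pose_pnp(matches):
          """Placeholder for a feature-based visual pose estimator."""
          return np.eye(4)

      def propagate_with_imu(prev_pose, accel, dt):
          """Placeholder IMU dead-reckoning (rotation update omitted for brevity):
          a crude constant-acceleration translation step."""
          pose = prev_pose.copy()
          pose[:3, 3] += 0.5 * accel * dt**2
          return pose

      def fuse_pose(matches, prev_pose, accel, dt):
          """Loose fusion: trust the visual estimate only when enough correct
          matches are available, otherwise fall back to IMU propagation."""
          if len(matches) >= MIN_MATCHES:
              return estimate_pose_pnp(matches)
          return propagate_with_imu(prev_pose, accel, dt)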

  2. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data, without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction, and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias of up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  3. Lipid-anthropometric index optimization for insulin sensitivity estimation

    NASA Astrophysics Data System (ADS)

    Velásquez, J.; Wong, S.; Encalada, L.; Herrera, H.; Severeyn, E.

    2015-12-01

    Insulin sensitivity (IS) is the ability of cells to react to the presence of insulin; when this ability is diminished, low insulin sensitivity or insulin resistance (IR) is considered. IR has been related to other metabolic disorders such as metabolic syndrome (MS), obesity, dyslipidemia and diabetes. IS can be determined using direct or indirect methods. The indirect methods are less accurate and less invasive than the direct ones, and they use glucose and insulin values from the oral glucose tolerance test (OGTT). Their accuracy is established by comparison between direct and indirect methods using the Spearman rank correlation coefficient. This paper proposes a lipid-anthropometric index which offers acceptable correlation with an insulin sensitivity index for different populations (DB1 = subjects with MS, DB2 = sedentary subjects without MS, DB3 = marathon runners) without using OGTT glucose and insulin values. The proposed method is parametrically optimized through random cross-validation, using the Spearman rank correlation with the CAUMO method as comparator. CAUMO is an indirect method designed from a simplification of the minimal model intravenous glucose tolerance test direct method (MINMOD-IGTT), with acceptable correlation (0.89). The results show that the optimized method achieves a better correlation with CAUMO in all populations than the non-optimized one. It was also observed that the optimized method correlates better with CAUMO in the DB2 and DB3 groups than the HOMA-IR method, which is the most widely used method for diagnosing insulin resistance. The proposed optimized method could detect incipient insulin resistance, since it classifies as insulin-resistant those subjects who present impaired postprandial insulin and glucose values.

  4. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
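
    The record does not reproduce its fitting function, but the underlying relationship is the Theis solution s = Q/(4πT)·W(u) with u = r²S/(4Tt) and W the exponential integral E1; a direct nonlinear fit of T and S is sketched below (not the authors' regression method; Q, r, and the synthetic data are assumed values):

      import numpy as np
      from scipy.special import exp1
      from scipy.optimize import curve_fit

      Q, r = 0.01, 50.0  # pumping rate (m^3/s) and observation distance (m), assumed

      def theis_drawdown(t, T, S):
          """Theis solution: s = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t)."""
          u = r**2 * S / (4.0 * T * t)
          return Q / (4.0 * np.pi * T) * exp1(u)

      # synthetic observed drawdowns for illustration
      t_obs = np.logspace(1, 5, 20)                 # seconds
      s_obs = theis_drawdown(t_obs, 1e-3, 1e-4)
      (T_hat, S_hat), _ = curve_fit(theis_drawdown, t_obs, s_obs,
                                    p0=(1e-2, 1e-3),
                                    bounds=([1e-6, 1e-8], [1.0, 1.0]))
      print(T_hat, S_hat)  # recovers transmissivity and storativity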

  5. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle-tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.

  6. Optimizing efficiency of height modeling for extensive forest inventories.

    Treesearch

    T.M. Barrett

    2006-01-01

    Although critical to monitoring forest ecosystems, inventories are expensive. This paper presents a generalizable method for using an integer programming model to examine tradeoffs between cost and estimation error for alternative measurement strategies in forest inventories. The method is applied to an example problem of choosing alternative height-modeling strategies...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as the optimization of an implicit function, with the liver region optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers in the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and an MRF (Markov random field); a shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization; furthermore, H-PSO (hybrid particle swarm optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until the stopping conditions (maximum iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%-95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and research funding from Jinan City (No.201401221 and No.20120109)

  8. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H^1 product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lamé constants.

  9. Sequential estimation and satellite data assimilation in meteorology and oceanography

    NASA Technical Reports Server (NTRS)

    Ghil, M.

    1986-01-01

    The central theme of this review article is the role that dynamics plays in estimating the state of the atmosphere and of the ocean from incomplete and noisy data. Objective analysis and inverse methods represent an attempt at relying mostly on the data and minimizing the role of dynamics in the estimation. Four-dimensional data assimilation tries to balance properly the roles of dynamical and observational information. Sequential estimation is presented as the proper framework for understanding this balance, and the Kalman filter as the ideal, optimal procedure for data assimilation. The optimal filter computes forecast error covariances of a given atmospheric or oceanic model exactly, and hence data assimilation should be closely connected with predictability studies. This connection is described, and consequences drawn for currently active areas of the atmospheric and oceanic sciences, namely, mesoscale meteorology, medium and long-range forecasting, and upper-ocean dynamics.
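
    For reference, one forecast/analysis cycle of the linear Kalman filter that the review presents as the ideal sequential estimator (generic toy form, not a meteorological model):

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One forecast/analysis cycle: propagate the state and its error
          covariance, then update with the new observation z."""
          # forecast
          x_f = F @ x
          P_f = F @ P @ F.T + Q
          # analysis (update)
          S = H @ P_f @ H.T + R                  # innovation covariance
          K = P_f @ H.T @ np.linalg.inv(S)       # Kalman gain
          x_a = x_f + K @ (z - H @ x_f)
          P_a = (np.eye(len(x)) - K @ H) @ P_f   # analysis error covariance
          return x_a, P_a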

  10. Particle swarm optimization algorithm based parameters estimation and control of epileptiform spikes in a neural mass model

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan

    2016-07-01

    This paper proposes an epilepsy detection and closed-loop control strategy based on the particle swarm optimization (PSO) algorithm. The proposed strategy can effectively suppress epileptic spikes in neural mass models, where the epileptiform spikes are recognized as biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. The PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all the epileptic spikes; its estimates of unmeasurable parameters improve significantly on those of the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting a proportional-integral controller. Numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
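
    A generic PSO estimation loop of the kind described, with a toy objective standing in for the neural-mass-model fit (all hyperparameters are illustrative assumptions):

      import numpy as np

      def pso_estimate(loss, bounds, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimize `loss` over a box; returns the best parameters found."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, (n_particles, len(lo)))
          v = np.zeros_like(x)
          p_best, p_val = x.copy(), np.array([loss(p) for p in x])
          g_best = p_best[np.argmin(p_val)]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
              x = np.clip(x + v, lo, hi)
              val = np.array([loss(p) for p in x])
              better = val < p_val
              p_best[better], p_val[better] = x[better], val[better]
              g_best = p_best[np.argmin(p_val)]
          return g_best

      # toy use: recover an assumed excitatory-inhibitory ratio of 3.3
      target = 3.3
      print(pso_estimate(lambda p: (p[0] - target) ** 2, bounds=[(0.0, 10.0)]))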

  11. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, since the unconstrained Kalman filter is theoretically optimal, the incorporation of inequality constraints poses some risk to estimation accuracy. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
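
    One way to realize such confidence-based tuning is to gate on the normalized innovation squared; the clipping projection and gate value below are illustrative assumptions, not the paper's exact rule:

      import numpy as np

      def constrain_if_unconfident(x, innovation, S, x_lo, x_hi, gate=9.0):
          """Normalized innovation squared (NIS) as a confidence measure:
          small NIS -> residuals agree with theory, trust the unconstrained
          (optimal) Kalman estimate; large NIS -> project the state into the
          heuristic bounds (simple clipping shown)."""
          nis = innovation @ np.linalg.solve(S, innovation)
          if nis <= gate:
              return x
          return np.clip(x, x_lo, x_hi)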

  12. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land-use changes, which make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties, and obtaining all of them through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. To address these issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters; thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm and showed a remarkable improvement compared to simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used in further model validation studies to identify inherent deficiencies in the model physics.

  13. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, homogenization was found to be applicable only for panels much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel, so a single unit cell was used in the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points, which in turn demands computationally expensive finite element analyses; these were replaced by efficient, low-fidelity surrogate models. In an optimization process it is important to represent the constraints accurately in order to find the optimum design. Instead of building global surrogate models from a large number of designs, the computational resources were directed towards target regions near the constraint boundaries, for accurate representation of the constraints, using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that adaptive sampling drastically reduced the number of designs required to find the optimum while improving accuracy. System reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. The separable Monte Carlo method was employed, which allows separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. To estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.

  14. Study on feed forward neural network convex optimization for LiFePO4 battery parameters

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Based on the LiFePO4 battery used in modern automatic walking equipment for facility agriculture, the parameter identification of the LiFePO4 battery is analyzed. An improved method for the process model of the lithium battery is proposed, and an on-line estimation algorithm is presented. The parameters of the battery are identified using a feed-forward neural network convex optimization algorithm.

  15. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter that uses 63Ni isotope radiation. Using the Monte Carlo methods of the GEANT4 software with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.

  16. Acceleration of high resolution temperature based optimization for hyperthermia treatment planning using element grouping.

    PubMed

    Kok, H P; de Greef, M; Bel, A; Crezee, J

    2009-08-01

    In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high-resolution temperature-based optimization is proposed. Element grouping as described in the literature uses selected voxel sets instead of single voxels to reduce computation time: elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution the temperature matrices are unknown, and temperatures are estimated using low-resolution (1 cm) computations and the high-resolution (2 mm) temperature distribution computed for low-resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for the high-resolution eigenvalues, and the heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low-resolution eigenvalues and eigenvectors, since the high-resolution eigenvectors remain unknown. Four different grouping criteria were applied, with constraints set on the average group temperatures. Element grouping was applied for five patients, and optimal settings for the AMC-8 system were determined. Without element grouping, the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent unacceptable violation of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor converges to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. The maximum violation of a constraint in a hot spot was 0.24-0.34 degrees C; the average maximum violation over all five patients was 0.09-0.21 degrees C, which is acceptable. High-resolution temperature-based optimization using element grouping can thus achieve a speed-up factor of 4-8 without large deviations from the conventional method.

  17. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, the literature reports only slight quality improvements of the motion-corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs: entropy and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure; in this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient; no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.

  18. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.

  19. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1996-01-01

    In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.

  20. A new approach for estimating the Jupiter and Saturn gravity fields using Juno and Cassini measurements, trajectory estimation analysis, and a dynamical wind model optimization

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Durante, Daniele; Iess, Luciano; Kaspi, Yohai

    2017-04-01

    The ongoing Juno spacecraft measurements are improving our knowledge of Jupiter's gravity field. Similarly, the Cassini Grand Finale will improve the gravity estimate of Saturn. The analysis of the Juno and Cassini Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity fields of Jupiter and Saturn, additional information needs to be incorporated into the analysis, especially with regard to the planets' wind structures. In this work we propose a new iterative approach for the estimation of the Jupiter and Saturn gravity fields, using simulated measurements, a trajectory estimation model, and an adjoint-based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model is used to obtain the gravitational moments. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an optimization method, the likely penetration depth of the winds is computed and its uncertainty evaluated. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an estimate of its uncertainties, to be used as a priori information for a new calculation of the gravity field. We test this method both for zonal harmonics only and for a full gravity field including tesseral harmonics. The results show that with this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the added information from the dynamical model, which includes the wind structure and its depth. It is therefore suggested that the method presented here has the potential to improve the accuracy of the gravity moments expected from the Juno and Cassini radio science experiments.

  1. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.

  2. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means that the fidelity is independent of input states. However, any optimal estimator with a finite POVM for M (>N) copies is universal if it is used for N copies as input.

  3. Distance measures and optimization spaces in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Rode, Karyn D.; Budge, Suzanne M.; Thiemann, Gregory W.

    2015-01-01

    Quantitative fatty acid signature analysis has become an important method of diet estimation in ecology, especially marine ecology. Controlled feeding trials to validate the method and estimate the calibration coefficients necessary to account for differential metabolism of individual fatty acids have been conducted with several species from diverse taxa. However, research into potential refinements of the estimation method has been limited. We compared the performance of the original method of estimating diet composition with that of five variants based on different combinations of distance measures and calibration-coefficient transformations between prey and predator fatty acid signature spaces. Fatty acid signatures of pseudopredators were constructed using known diet mixtures of two prey data sets previously used to estimate the diets of polar bears Ursus maritimus and gray seals Halichoerus grypus, and their diets were then estimated using all six variants. In addition, previously published diets of Chukchi Sea polar bears were re-estimated using all six methods. Our findings reveal that the selection of an estimation method can meaningfully influence estimates of diet composition. Among the pseudopredator results, which allowed evaluation of bias and precision, differences in estimator performance were rarely large, and no one estimator was universally preferred, although estimators based on the Aitchison distance measure tended to have modestly superior properties compared to estimators based on the Kullback-Leibler distance measure. However, greater differences were observed among estimated polar bear diets, most likely due to differential estimator sensitivity to assumption violations. Our results, particularly the polar bear example, suggest that additional research into estimator performance and model diagnostics is warranted.

  4. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.

  5. Improving Efficiency of Passive RFID Tag Anti-Collision Protocol Using Dynamic Frame Adjustment and Optimal Splitting.

    PubMed

    Memon, Muhammad Qasim; He, Jingsha; Yasir, Mirza Ammar; Memon, Aasma

    2018-04-12

    Radio frequency identification is a wireless communication technology which enables data gathering and identification of any tagged object. The number of collisions produced during wireless communication leads to a variety of problems, including an unwanted number of iterations, reader-induced idle slots, and computational complexity in the estimation as well as the recognition of the number of tags. In this work, dynamic frame adjustment and optimal splitting are employed together in the proposed algorithm. In the dynamic frame adjustment method, the length of frames is based on the quantity of tags, to yield optimal efficiency. The optimal splitting method shortens the duration of idle slots using an optimal value for the splitting level M_opt, where M > 2, to vary slot sizes and obtain the minimal identification time for the idle slots. The proposed algorithm offers the advantage of avoiding the cumbersome estimation of the quantity of tags, and the number of tags has no effect on its performance efficiency. Our experimental results show that with the proposed algorithm the efficiency curve remains constant as the number of tags varies from 50 to 450, resulting in an overall theoretical gain in efficiency of 0.032 over the system efficiency of 0.441, thus outperforming both dynamic binary tree slotted ALOHA (DBTSA) and binary splitting protocols.
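
    For context, in frame-slotted ALOHA the read efficiency peaks when the frame length matches the number of unread tags; the sketch below shows that basic dynamic frame adjustment (with an idealized tag estimate) yields roughly 1/e ≈ 0.37, the baseline the paper's optimal splitting is designed to push beyond:

      import numpy as np

      def read_cycle(n_tags, rng):
          """Frame-slotted ALOHA with the frame length set to the (here
          idealized, perfectly known) number of unread tags; returns the
          overall efficiency = tags identified per slot used."""
          remaining, total_slots = n_tags, 0
          while remaining > 0:
              L = max(1, remaining)                 # dynamic frame adjustment
              slots = rng.integers(0, L, size=remaining)
              singles = np.sum(np.bincount(slots, minlength=L) == 1)
              remaining -= singles                  # singleton slots succeed
              total_slots += L
          return n_tags / total_slots

      rng = np.random.default_rng(1)
      print(np.mean([read_cycle(200, rng) for _ in range(100)]))  # about 0.37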

  6. Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.

    PubMed

    Yao, Yu; Zhao, Junhui; Wu, Lenan

    2018-05-29

    A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals, and to minimize the mean square error (MSE) of the TIR estimate. The waveform design approach is based on constantly learning the target features at the receiver. Under the multiple-antenna scenario, a dynamic feedback control loop is established to monitor in real time the changes in the target features extracted from received signals, and the transmitter adapts its transmitted waveform to suit the time-invariant environment. Finally, the simulation results show that, compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm improves the performance of TIR estimation for extended targets over multiple iterations, and has a relatively lower level of complexity.

  7. System and method for bullet tracking and shooter localization

    DOEpatents

    Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA

    2011-06-21

    A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.

  8. The value of value of information: best informing research design and prioritization using current methods.

    PubMed

    Eckermann, Simon; Karnon, Jon; Willan, Andrew R

    2010-01-01

    Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can vary from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large and further research not worthwhile or small and further research worthwhile. In contrast, each of questions 1-4 are shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize expected value of sample information (EVSI) minus expected costs across designs. In comparing complexity in use of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of partial EVPI may have potential value in refining overall research design. However, Occam's razor must be seriously considered in application of these VOI methods, given their increased complexity and current limitations in informing decision making, with restriction to EVPI rather than EVSI and not allowing for important decision-making contexts. Initial use of CLT methods to focus these more complex partial VOI methods towards where they may be useful in refining optimal overall trial design is suggested. Integrating CLT methods with such partial VOI methods to allow estimation of partial EVSI is suggested in future research to add value to the current VOI toolkit.
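
    For concreteness, EVPI with current information, the quantity the paper argues is neither necessary nor sufficient for question (i), is E[max_d NB(d, θ)] - max_d E[NB(d, θ)] and is straightforward to simulate (toy net-benefit model with assumed numbers, not one of the paper's case studies):

      import numpy as np

      rng = np.random.default_rng(0)
      theta = rng.normal(0.5, 0.2, size=100_000)    # uncertain effectiveness
      nb = np.stack([np.zeros_like(theta),          # d0: standard care
                     20_000 * theta - 10_000])      # d1: new treatment (toy NB)
      # EVPI = value with perfect information minus value of best current choice
      evpi = np.mean(nb.max(axis=0)) - nb.mean(axis=1).max()
      print(f"EVPI per patient: {evpi:.0f}")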

  9. Physical fundamentals of criterial estimation of nitriding technology for parts of friction units

    NASA Astrophysics Data System (ADS)

    Kuksenova, L. I.; Gerasimov, S. A.; Lapteva, V. G.; Alekseeva, M. S.

    2013-03-01

    Characteristics of the structure and properties of surface layers of nitrided structural steels and alloys, which affect the level of surface fracture under friction, are studied. A generalized structural parameter for optimizing the nitriding process and a rapid method for estimating the quality of the surface layer of nitrided parts of friction units are developed.

  10. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    PubMed

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal value within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference), may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth, simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to evidence the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference could be reduced to a frequentist approach under particular hypotheses, the former can be considered as a more generalist methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.

  11. Estimating neural response functions from fMRI

    PubMed Central

    Kumar, Sukhbinder; Penny, William

    2014-01-01

    This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246

  12. Application of iterative robust model-based optimal experimental design for the calibration of biocatalytic models.

    PubMed

    Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar

    2017-09-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
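
    The core computation in model-based OED is scoring candidate experiments with the Fisher Information Matrix built from parameter sensitivities; a sketch using finite-difference sensitivities and a Michaelis-Menten rate as a generic stand-in for the ω-transaminase kinetics (all values assumed):

      import numpy as np

      def rate(s, p):
          """Michaelis-Menten rate, v = Vmax * s / (Km + s)."""
          vmax, km = p
          return vmax * s / (km + s)

      def fim(s_points, p, sigma=0.05, eps=1e-6):
          """FIM = S^T S / sigma^2 from central finite-difference sensitivities."""
          S = np.zeros((len(s_points), len(p)))
          for j in range(len(p)):
              dp = np.zeros(len(p)); dp[j] = eps * p[j]
              S[:, j] = (rate(s_points, p + dp) - rate(s_points, p - dp)) / (2 * dp[j])
          return S.T @ S / sigma**2

      p0 = np.array([1.0, 0.5])        # current parameter estimates
      candidates = [np.array([0.1, 0.5]),
                    np.array([0.1, 5.0]),
                    np.array([2.0, 5.0])]  # candidate substrate sampling points
      best = max(candidates, key=lambda s: np.linalg.det(fim(s, p0)))  # D-optimal
      print(best)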

  13. Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.

    PubMed

    King, Leandra; Wakeley, John

    2016-09-01

    We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is both asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.

  14. Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.

    2017-01-01

    This paper presents a new adaptive control approach that incorporates a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. This controller modifies the initial reference model into a time-varying reference model that satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by real-time solutions of the time-varying Riccati and Sylvester equations, coupled with least-squares estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application to maneuver load alleviation control for a flexible aircraft.
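
    A frozen-time sketch of the optimal-control ingredient is given below: solving a single algebraic Riccati equation for an illustrative two-state plant and forming the modified reference model A - BK. The paper itself propagates time-varying Riccati and Sylvester equations in real time together with least-squares sensitivity estimation, which this sketch does not attempt; the plant matrices and weights are made up:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative 2-state plant
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])                   # performance weights (made up)
    R = np.array([[1.0]])

    P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati solution
    K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain
    Am = A - B @ K                             # modified reference model
    print("reference-model eigenvalues:", np.linalg.eigvals(Am))
    ```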

  15. On estimation of secret message length in LSB steganography in spatial domain

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2004-06-01

    In this paper, we present a new method for estimating the length of a secret message embedded as a bit-stream by Least Significant Bit (LSB) replacement at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. Among the advantages of the new method are its modular structure and a clean mathematical derivation that enables an elegant analysis of estimator accuracy using statistical image models.
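
    A simplified version of the weighted stego (WS) estimator can be written in a few lines: flip each LSB, predict each pixel from its neighbours, and form a weighted correlation whose expectation is the embedding rate. Weighting by 1/(5 + local variance) follows the spirit of the paper, but the moderated weights and boundary handling of the full method are omitted, and the test image is synthetic:

    ```python
    import numpy as np

    def ws_message_length(stego):
        """Weighted-stego estimate of the relative LSB payload p.
        stego: 2-D uint8 array; returns p in [0, 1] (noisy, can go negative)."""
        s = stego.astype(np.float64)
        flipped = s + 1.0 - 2.0 * (s % 2)          # LSB-flipped image
        # Cover predictor: average of the 4 neighbours (wrap-around borders).
        pred = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4.0
        mean_sq = (np.roll(s, 1, 0)**2 + np.roll(s, -1, 0)**2 +
                   np.roll(s, 1, 1)**2 + np.roll(s, -1, 1)**2) / 4.0
        var = np.maximum(mean_sq - pred**2, 0.0)   # local neighbour variance
        w = 1.0 / (5.0 + var)                      # favour flat regions
        w /= w.sum()
        return 2.0 * np.sum(w * (s - flipped) * (s - pred))

    # Self-check on a smooth synthetic cover with a 30% payload.
    rng = np.random.default_rng(3)
    x = np.linspace(0, 8 * np.pi, 256)
    cover = (96 + 64 * np.outer(np.sin(x), np.cos(x))).astype(np.uint8)
    mask = rng.random(cover.shape) < 0.30 / 2      # used pixels flip w.p. 1/2
    stego = np.where(mask, cover ^ 1, cover)
    print(f"estimated payload: {ws_message_length(stego):.3f} (true 0.30)")
    ```

    The key identity is that (s - s_flipped)(s - cover) is 1 exactly at flipped pixels and 0 elsewhere, so replacing the unknown cover by a local prediction and averaging yields the change rate, and twice that is the payload.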

  16. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same type of LiPBs with different capacities and cycle lives. The proposed strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant-current constant-voltage charging method.
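
    A minimal sliding-mode SOC observer, the estimation half of the strategy, is sketched below on a one-state coulomb-counting model with a linearised open-circuit-voltage curve. The fixed switching gain, toy battery parameters, and measurement model are assumptions for illustration; the paper's ASGSMO adapts the gain online and uses a richer cell model:

    ```python
    import numpy as np

    dt, Q = 1.0, 3600.0 * 2.0          # time step [s], capacity [As] (2 Ah)
    ocv = lambda soc: 3.2 + 0.9 * soc  # toy open-circuit-voltage curve [V]
    R0 = 0.05                          # ohmic resistance [ohm]

    def simulate_and_observe(n=1800, i_chg=2.0, gain=0.002):
        soc_true, soc_hat = 0.2, 0.5   # deliberately wrong initial guess
        rng = np.random.default_rng(4)
        for _ in range(n):
            soc_true += i_chg * dt / Q
            v_meas = ocv(soc_true) + R0 * i_chg + 1e-3 * rng.standard_normal()
            v_hat = ocv(soc_hat) + R0 * i_chg
            # Sliding-mode correction: model update plus switching term.
            soc_hat += i_chg * dt / Q + gain * np.sign(v_meas - v_hat)
        return soc_true, soc_hat

    true_soc, est_soc = simulate_and_observe()
    print(f"true SOC {true_soc:.3f}, observed SOC {est_soc:.3f}")
    ```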

  17. Theoretical and experimental comparative analysis of beamforming methods for loudspeaker arrays under given performance constraints

    NASA Astrophysics Data System (ADS)

    Olivieri, Ferdinando; Fazi, Filippo Maria; Nelson, Philip A.; Shin, Mincheol; Fontana, Simone; Yue, Lang

    2016-07-01

    Beamforming methods are available that provide the signals used to drive an array of sources in so-called personal audio systems. In this work, the performance of the delay-and-sum (DAS) method and of three widely used methods for optimal beamforming is compared by means of computer simulations and experiments in an anechoic environment, using a linear array of sources with given constraints on the quality of the reproduced field at the listener's position and a limit on the input energy to the array. Using the DAS method as a benchmark, the frequency-domain responses of the loudspeaker filters can be characterized in three regions. In the first region, at low frequencies, the input signals designed with the optimal methods are identical and provide higher directivity than the DAS method. In the second region, the performance of the optimal methods is similar to that of the DAS method. The third region starts above the limit set by spatial aliasing. A method is presented to estimate the boundaries of these regions.
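
    The DAS benchmark and the spatial-aliasing boundary of the third region can be illustrated directly. The sketch below computes DAS driving weights for a uniform linear array and the approximate frequency above which grating lobes appear for a steered array; the geometry and numbers are illustrative, not those of the experiment:

    ```python
    import numpy as np

    c, d, N = 343.0, 0.1, 8            # speed of sound, spacing [m], sources
    theta0 = np.deg2rad(30)            # steering angle

    def das_weights(f, theta0):
        """Delay-and-sum weights: phase-align the sources toward theta0."""
        k = 2 * np.pi * f / c
        n = np.arange(N) - (N - 1) / 2
        return np.exp(-1j * k * d * n * np.sin(theta0)) / N

    def beampattern_db(f, w, thetas):
        k = 2 * np.pi * f / c
        n = np.arange(N) - (N - 1) / 2
        steer = np.exp(1j * k * d * np.outer(np.sin(thetas), n))
        return 20 * np.log10(np.abs(steer @ w) + 1e-12)

    # Grating lobes (third region) appear above roughly this frequency
    # for a uniform linear array steered to theta0.
    f_alias = c / (d * (1 + abs(np.sin(theta0))))
    print(f"spatial aliasing above ~{f_alias:.0f} Hz")

    thetas = np.deg2rad(np.linspace(-90, 90, 7))
    print(np.round(beampattern_db(2000.0, das_weights(2000.0, theta0), thetas), 1))
    ```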

  18. Assessment of pollutant mean concentrations in the Yangtze estuary based on MSN theory.

    PubMed

    Ren, Jing; Gao, Bing-Bo; Fan, Hai-Mei; Zhang, Zhi-Hong; Zhang, Yao; Wang, Jin-Feng

    2016-12-15

    Reliable assessment of water quality is a critical issue for estuaries. Nutrient concentrations show significant spatial distinctions between areas under the influence of fresh-sea water interaction and anthropogenic effects. Given the limitations of general mean estimation approaches in this situation, the mean of surface with non-homogeneity (MSN) method was applied to obtain optimal linear unbiased estimates of the mean nutrient concentrations in the study area in the Yangtze estuary from 2011 to 2013. Other mean estimation methods, including block Kriging (BK), simple random sampling (SS) and stratified sampling (ST) inference, were applied simultaneously for comparison, and their performance was evaluated by estimation error. The results show that MSN had the highest accuracy, while SS had the highest estimation error; ST and BK showed intermediate performance. Thus, MSN is an appropriate method for reducing the uncertainty of mean pollutant estimation in estuaries. Copyright © 2016 Elsevier Ltd. All rights reserved.
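
    The benefit of respecting spatial non-homogeneity can be illustrated with a toy comparison of simple random sampling against stratified estimation on a two-stratum "concentration" population. This is not the MSN estimator itself, which additionally accounts for spatial autocorrelation within strata (as BK does globally), but it shows why SS performs worst when strata differ strongly:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    river = rng.normal(8.0, 1.0, 10_000)   # high-concentration stratum
    sea = rng.normal(2.0, 1.0, 30_000)     # low-concentration stratum
    pop = np.concatenate([river, sea])

    def srs(n=100):
        """Simple random sampling mean."""
        return rng.choice(pop, n).mean()

    def stratified(n=100):
        """Stratified mean with proportional allocation."""
        n1 = n * river.size // pop.size
        return (river.size * rng.choice(river, n1).mean()
                + sea.size * rng.choice(sea, n - n1).mean()) / pop.size

    reps = 2000
    err_srs = np.std([srs() for _ in range(reps)])
    err_str = np.std([stratified() for _ in range(reps)])
    print(f"true mean {pop.mean():.3f}; "
          f"SRS s.e. {err_srs:.3f}, stratified s.e. {err_str:.3f}")
    ```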

  19. Reference values assessment in a Mediterranean population for small dense low-density lipoprotein concentration isolated by an optimized precipitation method.

    PubMed

    Fernández-Cidón, Bárbara; Padró-Miquel, Ariadna; Alía-Ramos, Pedro; Castro-Castro, María José; Fanlo-Maresma, Marta; Dot-Bach, Dolors; Valero-Politi, José; Pintó-Sala, Xavier; Candás-Estébanez, Beatriz

    2017-01-01

    High serum concentrations of small dense low-density lipoprotein cholesterol (sd-LDL-c) particles are associated with risk of cardiovascular disease (CVD), but their clinical application has been hindered by the laborious current method used for their quantification. The aims of this study were to optimize a simple and fast precipitation method for isolating sd-LDL particles and to establish a reference interval in a Mediterranean population. Forty-five serum samples were collected, and sd-LDL particles were isolated using a modified heparin-Mg2+ precipitation method. The sd-LDL-c concentration was calculated by subtracting high-density lipoprotein cholesterol (HDL-c) from the total cholesterol measured in the supernatant. This method was compared with the reference method (ultracentrifugation), and reference values were estimated according to the recommendations of the Clinical and Laboratory Standards Institute and the International Federation of Clinical Chemistry and Laboratory Medicine. The sd-LDL-c concentration was measured in serum from 79 subjects with no lipid metabolism abnormalities. The Passing-Bablok regression equation is y = 0.07 (-0.10 to 0.13) + 1.52 (0.72 to 1.73) x; the intercept confidence interval includes 0 and the slope confidence interval includes 1, indicating no statistically significant differences between the modified precipitation method and the ultracentrifugation reference method. Similarly, no differences were detected when considering only sd-LDL-c from dyslipidemic patients, since the modifications added to the precipitation method facilitated the proper sedimentation of triglycerides and other lipoproteins. The reference interval for sd-LDL-c concentration estimated in a Mediterranean population was 0.04-0.47 mmol/L. In summary, an optimized heparin-Mg2+ precipitation method for sd-LDL particle isolation was developed, and reference intervals were established in a Spanish Mediterranean population. Measured values were equivalent to those obtained with the reference method, supporting its clinical application in both normolipidemic and dyslipidemic subjects.
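
    The two computational steps reported here, the subtraction that yields sd-LDL-c and the percentile-based reference interval, are easy to make concrete. The sketch below uses simulated values (not the study's data) and a simple nonparametric 2.5th/97.5th percentile interval, one common approach in reference-value studies:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 79                                         # cohort size, as in the study
    supernatant_chol = rng.normal(1.60, 0.12, n)   # mmol/L, illustrative
    hdl_c = rng.normal(1.35, 0.05, n)              # mmol/L, illustrative

    # sd-LDL-c by subtraction after heparin-Mg2+ precipitation.
    sd_ldl_c = supernatant_chol - hdl_c

    # Nonparametric reference interval: central 95% of the distribution.
    lo, hi = np.percentile(sd_ldl_c, [2.5, 97.5])
    print(f"reference interval: {lo:.2f} to {hi:.2f} mmol/L")
    ```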

  20. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method, and the optimal combined data assimilation-initialization method is a modified version of the KB filter.
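
    A discrete-time Kalman filter, the sequential counterpart of the KB filter discussed above, can be written as a forecast step followed by an analysis (update) step. The two-variable linear "model" below is purely illustrative, standing in for a linearized NWP system:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    M = np.array([[0.99, 0.1], [-0.1, 0.99]])   # linear model dynamics
    H = np.array([[1.0, 0.0]])                  # observe first component only
    Qc = 0.01 * np.eye(2)                       # model-error covariance
    Rc = 0.04 * np.eye(1)                       # observation-error covariance

    x_true = np.array([1.0, 0.0])
    x, P = np.zeros(2), np.eye(2)               # analysis state and covariance
    for k in range(100):
        # Truth evolves; a noisy observation arrives.
        x_true = M @ x_true + rng.multivariate_normal(np.zeros(2), Qc)
        y = H @ x_true + rng.multivariate_normal(np.zeros(1), Rc)
        # Forecast step.
        x, P = M @ x, M @ P @ M.T + Qc
        # Analysis (update) step.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rc)
        x = x + K @ (y - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("final analysis error:", np.round(x - x_true, 3))
    ```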
