Sample records for optimal local approximation

  1. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated, using the new flexible approximation models for the violated local constraints.

  2. Local Approximation and Hierarchical Methods for Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function that avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnection and show that they outperform the baseline approach used in the industry.
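The low-rank value function idea above (compute exact values only on a subset of states, fill in the rest by matrix completion) can be sketched with a toy hard-impute iteration. Everything below — the synthetic rank-2 value matrix, the 60% observation mask, the iteration count — is an illustrative assumption, not the method from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a "value function" over a 30 x 30 state grid
# with exact low-rank structure.
n, m, rank = 30, 30, 2
V_true = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, m))

# Pretend we can afford the exact value on only ~60% of the states.
mask = rng.random((n, m)) < 0.6

# Hard-impute: alternately fill the missing entries from the current
# low-rank guess and re-project onto the rank-r manifold via SVD.
V = np.where(mask, V_true, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    V_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
    V = np.where(mask, V_true, V_lr)              # keep observed entries exact

rel_err = np.linalg.norm(V_lr - V_true) / np.linalg.norm(V_true)
```

When the true value surface really is near-low-rank, the completed matrix agrees closely with the exact one at a fraction of the evaluation cost.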

  3. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs exhibit noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution when explicit constraints are present. To remedy this problem, the mathematical community has developed different versions of a composite-steps approach, which consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework, using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential steps algorithms. In this paper, a description of the similarities is presented, and an expansion of the two-step algorithm is presented for the case of approximations.

  4. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822

  5. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses these nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely, it produces the same estimator in the next iteration. The general theory is demonstrated on four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression.
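The one-step local linear approximation (LLA) described here can be sketched as: fit an initial lasso, convert it to per-coefficient weights via the SCAD penalty derivative, then solve one weighted lasso. The data, tuning values, and coordinate-descent solver below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, weights, lam, iters=200):
    """Cyclic coordinate descent for a lasso with per-coefficient weights."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual
            z = X[:, j] @ r / n
            b[j] = soft(z, lam * weights[j]) / col_sq[j]
    return b

def scad_weight(b, lam, a=3.7):
    """SCAD penalty derivative (divided by lam), used as the LLA weight."""
    ab = np.abs(b)
    return np.where(ab <= lam, 1.0,
                    np.maximum(a * lam - ab, 0.0) / ((a - 1.0) * lam))

# Toy sparse regression problem (illustrative, not from the paper).
rng = np.random.default_rng(1)
n, p = 200, 10
beta = np.zeros(p)
beta[0], beta[3] = 2.0, -1.5
X = rng.normal(size=(n, p))
y = X @ beta + 0.3 * rng.normal(size=n)

lam = 0.2
b0 = weighted_lasso(X, y, np.ones(p), lam)             # initial lasso fit
b1 = weighted_lasso(X, y, scad_weight(b0, lam), lam)   # one-step LLA
support = set(np.flatnonzero(np.abs(b1) > 0.5).tolist())
```

Large initial coefficients receive zero SCAD weight and hence no shrinkage in the second step, which is exactly how the one-step estimator removes the lasso's bias on the true support.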

  6. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses these nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely, it produces the same estimator in the next iteration. The general theory is demonstrated on four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression. PMID:25598560

  7. The optimized effective potential and the self-interaction correction in density functional theory: Application to molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.

    2000-05-08

    The Krieger, Li, and Iafrate approximation to the optimized effective potential including the self-interaction correction for density functional theory has been implemented in a molecular code, NWChem, that uses Gaussian functions to represent the Kohn-Sham spin-orbitals. The differences from implementations of the self-interaction correction in codes where plane waves are used with an optimized effective potential are discussed. The importance of localizing the spin-orbitals to maximize the exchange-correlation contribution of the self-interaction correction is discussed. We carried out exchange-only calculations to compare the results obtained with these approximations against those obtained with the local spin density approximation, the generalized gradient approximation, and Hartree-Fock theory. Interesting results have been obtained for the energy difference (gap) between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) (spin-orbital energies of closed-shell atoms and molecules) using the optimized effective potential and the self-interaction correction. The effect of the diffuse character of the basis set on the HOMO and LUMO eigenvalues at the various levels is discussed. Total energies obtained with the optimized effective potential and the self-interaction correction show that the exchange energy with these approximations is overestimated; this will be an important topic for future work. (c) 2000 American Institute of Physics.

  8. Construction of nested maximin designs based on successive local enumeration and modified novel global harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin

    2017-01-01

    Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called a nested design, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
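A minimal stand-in for maximin design construction — plain random search over Latin hypercube candidates scored by their minimum pairwise distance, rather than the paper's successive local enumeration or harmony search — might look like:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One random Latin hypercube design: n points in [0, 1]^d,
    exactly one point per axis-aligned stratum in every dimension."""
    pts = np.empty((n, d))
    for k in range(d):
        pts[:, k] = (rng.permutation(n) + rng.random(n)) / n
    return pts

def min_pairwise_dist(pts):
    """Maximin criterion: the smallest pairwise Euclidean distance."""
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min()

rng = np.random.default_rng(2)
candidates = [latin_hypercube(20, 2, rng) for _ in range(200)]
best = max(candidates, key=min_pairwise_dist)   # maximin selection
```

The Latin hypercube constraint guarantees one-dimensional space-filling; the maximin selection then spreads the points out in the full space.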

  9. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B.G.

    1999-11-11

    The Krieger-Li-Iafrate approximation can be expressed as the zeroth order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate a first order correction can be obtained which recovers the bulk of quantal oscillations missing in the zeroth order approximation. A comparison of calculated total energies are given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.

  10. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then yields either a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus, the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.

  11. Generalization of the optimized-effective-potential model to include electron correlation: A variational derivation of the Sham-Schlueter equation for the exact exchange-correlation potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casida, M.E.

    1995-03-01

    The now-classic optimized-effective-potential (OEP) approach of Sharp and Horton [Phys. Rev. 90, 317 (1953)] and Talman and Shadwick [Phys. Rev. A 14, 36 (1976)] seeks the local potential that is variationally optimized to best approximate the Hartree-Fock exchange operator. The resulting OEP can be identified as the exchange potential of Kohn-Sham density-functional theory. The present work generalizes this OEP approach to treat the correlated case, and shows that the Kohn-Sham exchange-correlation potential is the variationally best local approximation to the exchange-correlation self-energy. This provides a variational derivation of the equation for the exact exchange-correlation potential that was derived by Sham and Schlueter using a density condition. Implications for an approximate physical interpretation of the Kohn-Sham orbitals are discussed. A correlated generalization of the Sharp-Horton and Krieger-Li-Iafrate [Phys. Lett. A 146, 256 (1990)] approximation of the exchange potential is introduced in the quasiparticle limit.

  12. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiative transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper-triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions on very fine grids.
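The factor-of-2 advantage of Gauss-Seidel over Jacobi is easy to reproduce on a generic diagonally dominant linear system — a toy stand-in, not a radiative transfer problem:

```python
import numpy as np

def run(update, A, b, tol=1e-8, max_iter=10_000):
    """Iterate a stationary scheme until the residual norm drops below tol."""
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x = update(A, b, x)
        if np.linalg.norm(b - A @ x) < tol:
            return x, k
    return x, max_iter

def jacobi(A, b, x):
    d = np.diag(A)
    return (b - (A - np.diag(d)) @ x) / d

def gauss_seidel(A, b, x):
    x = x.copy()
    for i in range(len(b)):                       # sweep uses updated values
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Diagonally dominant tridiagonal system, so both schemes converge.
n = 50
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x_j, iters_j = run(jacobi, A, b)
x_gs, iters_gs = run(gauss_seidel, A, b)
```

For such matrices the Gauss-Seidel spectral radius is the square of the Jacobi radius, so roughly half as many sweeps are needed — the same factor of 2 cited in the abstract.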

  13. Recent advances in approximation concepts for optimum structural design

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Haftka, Raphael T.

    1991-01-01

    The basic approximation concepts used in structural optimization are reviewed, along with some of the most recent developments in the area since the introduction of the concept in the mid-seventies. The paper distinguishes between local, medium-range, and global approximations, and covers both function approximations and problem approximations. It shows that, although the lack of comparative data established on reference test cases prevents an accurate assessment, there have been significant improvements. The largest number of developments has been in the areas of local function approximations and the use of intermediate variables and response quantities. It also appears that some new methodologies are emerging which could greatly benefit from the introduction of new computer architectures.

  14. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite-horizon optimal control problems for discrete-time nonlinear systems. The focus of this paper is to study the admissibility properties and termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.

  15. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  16. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging, developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility comes at the cost of increased computational expense and decreased ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
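The trade-off this abstract describes can be reproduced in a few lines: a least-squares quadratic cannot follow a response with multiple local extrema, while a Gaussian RBF interpolant (a simple kriging-like scheme, standing in for the paper's kriging model) can. The test function and length scale below are illustrative choices:

```python
import numpy as np

def f(x):
    """Test response with multiple local extrema."""
    return np.sin(3.0 * x) + 0.3 * x

x_train = np.linspace(0.0, 4.0, 12)
y_train = f(x_train)
x_test = np.linspace(0.0, 4.0, 200)

# Model 1: quadratic polynomial fit by least squares.
y_poly = np.polyval(np.polyfit(x_train, y_train, deg=2), x_test)

# Model 2: Gaussian RBF interpolation, a simple kriging-like interpolator.
def rbf(a, b, length=0.5):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * length ** 2))

w = np.linalg.solve(rbf(x_train, x_train) + 1e-10 * np.eye(12), y_train)
y_rbf = rbf(x_test, x_train) @ w

err_poly = np.abs(y_poly - f(x_test)).max()
err_rbf = np.abs(y_rbf - f(x_test)).max()
```

The interpolant passes through every training point and tracks the oscillations; the quadratic averages them away — at the cost of solving a dense (and potentially ill-conditioned) kernel system.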

  17. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks.

    PubMed

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-05-21

    Energy readings have been an efficient and attractive measurement for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum likelihood problems are derived by fusing received acoustic energy readings transmitted from local sensors. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programming, semidefinite programming, or mixed formulations for both the sensor self-localization and source localization cases. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and semidefinite relaxation into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods.
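Under a Gaussian-noise, inverse-square energy decay model, the maximum likelihood source estimate reduces to nonlinear least squares; a brute-force grid search (an illustrative baseline, not the paper's convex relaxations) recovers the source. The sensor layout, gain, and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical network: known sensor positions, unknown acoustic source.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.0]])
source = np.array([0.3, 0.7])
gain = 5.0                       # source emission power (assumed known)

def energy_model(pts):
    """Inverse-square energy decay at each sensor for candidate points."""
    d2 = ((pts[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=-1)
    return gain / np.maximum(d2, 1e-12)

readings = energy_model(source[None, :])[0] + 0.01 * rng.normal(size=len(sensors))

# With i.i.d. Gaussian noise, ML estimation = nonlinear least squares;
# here solved by brute-force grid search over the unit square.
grid = np.linspace(0.0, 1.0, 201)
gx, gy = np.meshgrid(grid, grid)
pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
cost = ((energy_model(pts) - readings) ** 2).sum(axis=1)
estimate = pts[cost.argmin()]
```

Grid search scales poorly with dimension and resolution, which is precisely why the paper relaxes the nonconvex objective into second-order cone and semidefinite programs instead.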

  18. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-01-01

    Energy readings have been an efficient and attractive measurement for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum likelihood problems are derived by fusing received acoustic energy readings transmitted from local sensors. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programming, semidefinite programming, or mixed formulations for both the sensor self-localization and source localization cases. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and semidefinite relaxation into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods. PMID:29883410

  19. Local CC2 response method based on the Laplace transform: analytic energy gradients for ground and excited states.

    PubMed

    Ledermüller, Katrin; Schütz, Martin

    2014-04-28

    A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.

  20. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
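The initialization-and-restart questions this paper studies can be illustrated with a toy continuous objective: hill climbing from a single uniform-at-random start may stall in a local minimum, while restarts reliably reach the global one. All specifics below (objective, step distribution, restart count) are illustrative assumptions, not the paper's Stochastic Greedy Search:

```python
import math
import random

def local_search(f, x0, steps, rng):
    """Greedy stochastic descent: accept random moves that improve f."""
    x, fx = x0, f(x0)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def search_with_restarts(f, restarts, steps, rng):
    """Uniform-at-random initialization plus restarts; keep the best run."""
    best_x, best_f = None, float("inf")
    for _ in range(restarts):
        x, fx = local_search(f, rng.uniform(-10.0, 10.0), steps, rng)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Multimodal toy objective: single-start search can stall in a local minimum.
f = lambda x: x * x + 10.0 * math.sin(x)

rng = random.Random(5)
x_best, f_best = search_with_restarts(f, restarts=30, steps=300, rng=rng)
```

The global minimum sits near x ≈ -1.3; with enough restarts at least one run lands in its basin, which is the effect the paper quantifies (and improves on with smarter, Viterbi-based initialization).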

  1. Analytical approximation schemes for solving exact renormalization group equations in the local potential approximation

    NASA Astrophysics Data System (ADS)

    Bervillier, C.; Boisseau, B.; Giacomini, H.

    2008-02-01

    The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made, and a similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, for which the latter fail).

  2. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Moreover, the off-line learning is extremely time- and memory-consuming, since it requires computing the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.
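The Euclidean-to-Hamming approximation mentioned above can be sketched with sign random projections (a classical LSH scheme, not DLLH itself; the dimensions and perturbation size are arbitrary): nearby feature vectors disagree on few code bits, while unrelated vectors disagree on roughly half.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 32))              # toy high-dimensional features
W = rng.normal(size=(32, 64))             # random projection to 64-bit codes

codes = (X @ W > 0).astype(np.uint8)      # binarize by sign

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

# A slightly perturbed copy of X[0] hashes to a nearly identical code,
# whereas an unrelated vector differs on roughly half of the 64 bits.
d_same = hamming(codes[0], (X[0] + 0.01 * rng.normal(size=32)) @ W > 0)
d_diff = hamming(codes[0], codes[1])
```

Here the Hamming distance approximates the angular (not manifold) distance; preserving manifold structure directly is precisely the harder problem the abstract targets.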

  3. Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.

    PubMed

    Haber, Aleksandar; Verhaegen, Michel

    2016-11-15

    We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
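The localized estimate can be sketched as follows (a toy 1-D illustration, not the authors' reconstructor; the neighborhood radius and decaying weights are invented): each wavefront value is a linear combination of slope measurements from a small neighborhood only, so the whole estimate is one sparse matrix-vector product.

```python
import numpy as np

n = 100            # wavefront sample points (toy 1-D geometry)
radius = 3         # neighborhood radius (assumed)

# Hypothetical localized reconstructor: the wavefront value at point i
# depends only on slopes within `radius` of i, so each row of the
# reconstructor has at most 2*radius + 1 nonzeros.
rows = []
for i in range(n):
    nbrs = range(max(0, i - radius), min(n, i + radius + 1))
    rows.append([(j, np.exp(-abs(i - j))) for j in nbrs])  # toy weights

slopes = np.random.default_rng(1).normal(size=n)           # measured slopes

# One sparse "matrix-vector product": O(n * radius) work instead of O(n^2).
wavefront = np.array([sum(w * slopes[j] for j, w in row) for row in rows])
```

Truncating the neighborhood is exactly where optimality is traded for locality: a larger radius approaches the minimum-variance estimate at higher cost.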

  4. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making the time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  5. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    PubMed

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  6. The effective local potential method: Implementation for molecules and relation to approximate optimized effective potential techniques

    NASA Astrophysics Data System (ADS)

    Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric

    2007-02-01

    We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.

  7. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.

  8. Computing the Partition Function for Kinetically Trapped RNA Secondary Structures

    PubMed Central

    Lorenz, William A.; Clote, Peter

    2011-01-01

    An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest-neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far smaller than the total number of structures – indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures; (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA; and (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy.
Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
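The local-optimality criterion defined in the abstract (no single base-pair addition or removal lowers the energy) can be checked directly; the pair set, energies, and neighbor moves below are a toy stand-in, ignoring the Turner model and base-pair compatibility constraints:

```python
def is_locally_optimal(structure, energy, neighbors):
    """A structure is locally optimal if no single-move neighbor
    (one base pair added or removed) has strictly lower energy."""
    e = energy(structure)
    return all(energy(nb) >= e for nb in neighbors(structure))

# Toy model: candidate base pairs with assumed (mostly stabilizing) energies.
pair_energy = {(1, 8): -2, (2, 7): -1, (3, 6): -1, (4, 5): 1}

def energy(s):
    return sum(pair_energy[p] for p in s)

def neighbors(s):
    out = [s - {p} for p in s]                            # remove one pair
    out += [s | {p} for p in pair_energy if p not in s]   # add one pair
    return out

s = frozenset({(1, 8), (2, 7), (3, 6)})
print(is_locally_optimal(s, energy, neighbors))   # → True
```

The paper's contribution is of course not this check but counting and sampling over all such kinetic traps without exhaustive enumeration.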

  9. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    PubMed

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only; it is proved by showing that the core and the folds of the protein will have two identical sides for all short sequences.
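The HP-model objective the abstract optimizes, the number of hydrophobic-hydrophobic (H-H) contacts on a 2D lattice, can be scored as below; the sequence, fold coordinates, and scoring are invented for illustration and ignore the paper's cellular-automaton machinery:

```python
def hh_contacts(sequence, coords):
    """Count H-H contacts: hydrophobic residues adjacent on the 2D
    lattice but not consecutive in the chain (toy HP-model score)."""
    occ = {c: aa for c, aa in zip(coords, sequence)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        for nb in ((x + 1, y), (x, y + 1)):   # count each undirected edge once
            if sequence[i] == 'H' and occ.get(nb) == 'H':
                j = coords.index(nb)
                if abs(i - j) > 1:            # skip chain neighbors
                    contacts += 1
    return contacts

# A U-shaped fold of HHHH: the two chain ends become lattice neighbors.
seq = "HHHH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hh_contacts(seq, fold))   # → 1
```

Maximizing this count (equivalently, minimizing the negated energy) is the NP-hard objective for which the one-third, one-fourth, and proposed one-half guarantees are stated.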

  10. Capturing nonlocal interaction effects in the Hubbard model: Optimal mappings and limits of applicability

    NASA Astrophysics Data System (ADS)

    van Loon, E. G. C. P.; Schüler, M.; Katsnelson, M. I.; Wehling, T. O.

    2016-10-01

    We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant quantum Monte Carlo simulations of the effective Hubbard model.

  11. Comparison of polynomial approximations and artificial neural nets for response surfaces in engineering optimization

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1991-01-01

    Engineering optimization problems involve minimizing some function subject to constraints. In areas such as aircraft optimization, the constraint equations may come from numerous disciplines, and response surfaces provide an efficient means of transferring information between these disciplines and the optimization algorithm. They are also suited to problems which may require numerous re-optimizations, such as multi-objective function optimization, or to problems where the design space contains numerous local minima, thus requiring repeated optimizations from different initial designs. Their use has been limited, however, by the fact that developing response surfaces requires function evaluations at randomly selected or preselected points in the design space. Thus, they have been thought to be inefficient compared to algorithms that proceed directly to the optimum solution. A development has taken place in the last several years which may affect the desirability of using response surfaces: artificial neural nets may be more efficient in developing response surfaces than the polynomial approximations which have been used in the past. This development is the concern of this work.

  12. Oscillator strengths, first-order properties, and nuclear gradients for local ADC(2).

    PubMed

    Schütz, Martin

    2015-06-07

    We describe theory and implementation of oscillator strengths, orbital-relaxed first-order properties, and nuclear gradients for the local algebraic diagrammatic construction scheme through second order. The formalism is derived via time-dependent linear response theory based on a second-order unitary coupled cluster model. The implementation presented here is a modification of our previously developed algorithms for Laplace transform based local time-dependent coupled cluster linear response (CC2LR); the local approximations thus are state specific and adaptive. The symmetry of the Jacobian leads to considerable simplifications relative to the local CC2LR method; as a result, a gradient evaluation is about four times less expensive. Test calculations show that in geometry optimizations, usually very similar geometries are obtained as with the local CC2LR method (provided that a second-order method is applicable). As an exemplary application, we performed geometry optimizations on the low-lying singlet states of chlorophyllide a.

  13. Fast-match on particle swarm optimization with variant system mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuang; Fang, Xin; Chen, Jie

    2018-03-01

    Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match the target with maximum similarity without prior knowledge of the target's pose. It relies on minimizing the Sum-of-Absolute-Differences (SAD) error to obtain the best affine transformation. The algorithm is widely used in the field of image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation with the Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random speed and flows throughout the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant system mechanism on this basis. The benefit is that we avoid matching against a huge number of potential transformations and getting trapped in local optima, so that a few transformations suffice to approximate the optimal solution. The experimental results show that our method is faster and more accurate while searching a smaller affine transformation space.
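The PSO search described above can be sketched with a minimal global-best variant (not the paper's variant-system mechanism; the inertia/attraction weights and the sphere test function are conventional illustrative choices, with each "particle" standing in for a candidate affine transformation):

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer (toy sketch)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and attraction weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's memory of its best
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Sphere function as a stand-in for the SAD error; global minimum 0 at origin.
best, val = pso(lambda x: sum(t * t for t in x), dim=2)
```

In the matching setting, `f` would evaluate the SAD error of the affine transformation encoded by the particle, so only a small number of transformations are ever evaluated.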

  14. Learning and inference using complex generative models in a spatial localization task.

    PubMed

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
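The Bayes-optimal computation the participants approximate can be written down for a Gaussian likelihood and a bimodal Gaussian-mixture prior; the numbers below are invented for illustration and are not the study's parameters:

```python
import math

def posterior_mean(m, s2, modes):
    """Posterior mean of target location given a Gaussian likelihood
    N(m, s2) and a Gaussian-mixture prior [(weight, mu, var), ...]."""
    post_w, post_mu = [], []
    for w, mu, var in modes:
        # Marginal likelihood of the cue m under this mixture component.
        tot = var + s2
        lik = w * math.exp(-(m - mu) ** 2 / (2 * tot)) / math.sqrt(2 * math.pi * tot)
        # Product of two Gaussians: precision-weighted component mean.
        mean = (mu / var + m / s2) / (1 / var + 1 / s2)
        post_w.append(lik)
        post_mu.append(mean)
    z = sum(post_w)
    return sum(w * mu for w, mu in zip(post_w, post_mu)) / z

# A sensory cue near the left mode: the estimate is pulled toward that mode,
# landing between the cue (-2) and the prior mode (-3).
est = posterior_mean(m=-2.0, s2=1.0, modes=[(0.5, -3.0, 0.5), (0.5, 3.0, 0.5)])
```

This captures the qualitative signature tested in the paper: estimates are biased toward whichever prior mode the noisy cue implicates, with the strength of the pull set by the relative variances.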

  15. The Hartree product and the description of local and global quantities in atomic systems: A study within Kohn-Sham theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.

    2000-01-15

    The Hartree product is analyzed in the context of Kohn-Sham theory. The differential equations that emerge from this theory are solved with the optimized effective potential using the Krieger, Li, and Iafrate approximation, in order to get a local potential as required by the ordinary Kohn-Sham procedure. Because the diagonal terms of the exact exchange energy are included in Hartree theory, it is self-interaction free and the exchange potential has the proper asymptotic behavior. We have examined the impact of this correct asymptotic behavior on local and global properties using this simple model to approximate the exchange energy. Local quantities, such as the exchange potential and the average local electrostatic potential are used to examine whether the shell structure in an atom is revealed by this theory. Global quantities, such as the highest occupied orbital energy (related to the ionization potential) and the exchange energy are also calculated. These quantities are contrasted with those obtained from calculations with the local density approximation, the generalized gradient approximation, and the self-interaction correction approach proposed by Perdew and Zunger. We conclude that the main characteristics in an atomic system are preserved with the Hartree theory. In particular, the behavior of the exchange potential obtained in this theory is similar to those obtained within other Kohn-Sham approximations. (c) 2000 American Institute of Physics.

  16. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented.
For both dose-volume and EUD based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
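The diagonal-Hessian approximation credited with Newton's speed advantage can be shown on a toy problem (not the IMRT objective; the quadratic and step size are invented): for a diagonal quadratic, one diagonal-Newton step reaches the minimum exactly, while fixed-step steepest descent crawls along the flat direction.

```python
import numpy as np

def diag_newton_step(grad, hess_diag, x):
    """Newton step with the Hessian approximated by its diagonal."""
    return x - grad(x) / hess_diag(x)

# Ill-conditioned quadratic f(x) = 0.5 * (a1*x1^2 + a2*x2^2).
a = np.array([1.0, 100.0])
grad = lambda x: a * x
hess_diag = lambda x: a

# One diagonal-Newton step solves this diagonal quadratic exactly.
x_newton = diag_newton_step(grad, hess_diag, np.array([1.0, 1.0]))

# Fixed-step steepest descent: 50 iterations, still far from the minimum
# along the low-curvature coordinate.
x_sd = np.array([1.0, 1.0])
for _ in range(50):
    x_sd = x_sd - 0.01 * grad(x_sd)
```

On real dose-volume or EUD objectives the Hessian is not diagonal, so the step is only approximate, but the abstract's timing results suggest the approximation retains much of the curvature information that SD and CG lack.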

  17. Time-local equation for exact time-dependent optimized effective potential in time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel; Chu, Shih-I.

    2017-04-01

    Solving and analyzing the exact time-dependent optimized effective potential (TDOEP) integral equation has been a longstanding challenge due to its highly nonlinear and nonlocal nature. To meet the challenge, we derive an exact time-local TDOEP equation that admits a unique real-time solution in terms of time-dependent Kohn-Sham orbitals and effective memory orbitals. For illustration, the dipole evolution dynamics of a one-dimensional model chain of hydrogen atoms is numerically evaluated and examined to demonstrate the utility of the proposed time-local formulation. Importantly, it is shown that the zero-force theorem, violated by the time-dependent Krieger-Li-Iafrate approximation, is fulfilled in the current TDOEP framework. This work was partially supported by DOE.

  18. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
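The selection rule above, minimize an estimated MSE that trades squared local bias against variance over candidate bandwidths, can be sketched with toy proxies (the bias and variance models below are invented, not the paper's spline and jackknife estimates):

```python
import numpy as np

def estimated_mse(b, curvature=2.0, noise=1.0):
    """Toy MSE proxy: quadratic local bias grows with bandwidth
    (curvature term), while variance shrinks as bandwidth grows."""
    bias2 = (curvature * b ** 2) ** 2
    var = noise / b
    return bias2 + var

# Scan a grid of candidate bandwidths and keep the MSE minimizer.
bandwidths = np.linspace(0.05, 1.0, 96)
mse = [estimated_mse(b) for b in bandwidths]
b_opt = float(bandwidths[int(np.argmin(mse))])
```

For this proxy the minimizer can also be found analytically (set the derivative of 4b⁴ + 1/b to zero, giving b = 16^(-1/5) ≈ 0.57), which the grid search recovers.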

  19. Improved distorted wave theory with the localized virial conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Y. K.; Zerrad, E.

    2009-12-01

    The distorted wave theory is operationally improved to treat the full collision amplitude, such that the corrections to the distorted wave Born amplitude can be systematically calculated. The localized virial conditions provide the tools necessary to test the quality of successive approximations at each stage and to optimize the solution. The details of the theoretical procedure are explained in concrete terms using a collisional ionization model and variational trial functions. For the first time, adjustable parameters associated with an approximate scattering solution can be fully determined by the theory. A small number of linear parameters are introduced to examine the convergence property and the effectiveness of the new approach.

  20. Alternatives to Pyrotechnic Distress Signals; Additional Signal Evaluation

    DTIC Science & Technology

    2017-06-01

    conducted a series of laboratory experiments designed to determine the optimal signal color and temporal pattern for identification against a variety of... "practice" trials at approximately 2030 local time and began the actual Test 1 observation trials at approximately 2130. The series of trials finished at... This report is the fourth in a series that details work

  1. Alternative derivation of an exchange-only density-functional optimized effective potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, D. P.

    2007-10-15

    An alternative derivation of the exchange-only density-functional optimized effective potential equation is given. It is shown that the localized Hartree-Fock-common energy denominator Green's function approximation (LHF-CEDA) for the density-functional exchange potential proposed independently by Della Sala and Goerling [J. Chem. Phys. 115, 5718 (2001)] and Gritsenko and Baerends [Phys. Rev. A 64, 42506 (2001)] can be derived as an approximation to the OEP exchange potential in a similar way that the KLI approximation [Phys. Rev. A 45, 5453 (1992)] was derived. An exact expression for the correction term to the LHF-CEDA approximation can thus be found. The correction term can be expressed in terms of the first-order perturbation-theory many-electron wave function shift when the Kohn-Sham Hamiltonian is subjected to a perturbation equal to the difference between the density-functional exchange potential and the Hartree-Fock nonlocal potential, expressed in terms of the Kohn-Sham orbitals. An explicit calculation shows that the density weighted mean of the correction term is zero, confirming that the LHF-CEDA approximation can be interpreted as a mean-field approximation. The corrected LHF-CEDA equation and the optimized effective potential equation are shown to be identical, with information distributed differently between terms in the equations. For a finite system the correction term falls off at least as fast as 1/r^4 for large r.

  2. Alternative derivation of an exchange-only density-functional optimized effective potential

    NASA Astrophysics Data System (ADS)

    Joubert, D. P.

    2007-10-01

    An alternative derivation of the exchange-only density-functional optimized effective potential equation is given. It is shown that the localized Hartree-Fock common energy denominator Green's function approximation (LHF-CEDA) for the density-functional exchange potential proposed independently by Della Sala and Görling [J. Chem. Phys. 115, 5718 (2001)] and Gritsenko and Baerends [Phys. Rev. A 64, 42506 (2001)] can be derived as an approximation to the OEP exchange potential in a similar way that the KLI approximation [Phys. Rev. A 45, 5453 (1992)] was derived. An exact expression for the correction term to the LHF-CEDA approximation can thus be found. The correction term can be expressed in terms of the first-order perturbation-theory many-electron wave function shift when the Kohn-Sham Hamiltonian is subjected to a perturbation equal to the difference between the density-functional exchange potential and the Hartree-Fock nonlocal potential, expressed in terms of the Kohn-Sham orbitals. An explicit calculation shows that the density weighted mean of the correction term is zero, confirming that the LHF-CEDA approximation can be interpreted as a mean-field approximation. The corrected LHF-CEDA equation and the optimized effective potential equation are shown to be identical, with information distributed differently between terms in the equations. For a finite system the correction term falls off at least as fast as 1/r^4 for large r.

  3. Constrained optimization of sequentially generated entangled multiqubit states

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique

    2009-08-01

    We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.

  4. Approximation algorithms for a genetic diagnostics problem.

    PubMed

    Kosaraju, S R; Schäffer, A A; Biesecker, L G

    1998-01-01

    We define and study a combinatorial problem called WEIGHTED DIAGNOSTIC COVER (WDC) that models the use of a laboratory technique called genotyping in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER for which the greedy heuristic has been extensively studied. We prove worst-case performance bounds on the greedy heuristic for WDC and for another heuristic we call directional greedy. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by greedy and dir-greedy and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
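
    The greedy heuristic for weighted set cover that the paper's WDC algorithms build on can be sketched as follows. This is an illustrative sketch with made-up data, not the authors' implementation; at each step it picks the set with the lowest cost per newly covered element.

    ```python
    def greedy_weighted_set_cover(universe, sets, costs):
        """Greedy heuristic for weighted SET COVER.

        universe: set of elements to cover
        sets:     dict mapping a set name to its element set
        costs:    dict mapping a set name to its cost
        """
        uncovered = set(universe)
        chosen = []
        while uncovered:
            # Pick the set minimizing cost per newly covered element.
            best = min(
                (name for name in sets if sets[name] & uncovered),
                key=lambda n: costs[n] / len(sets[n] & uncovered),
            )
            chosen.append(best)
            uncovered -= sets[best]
        return chosen
    ```

    For example, with universe {1,...,5} and unit-cost sets a={1,2,3}, b={3,4}, c={4,5}, d={1,5}, the heuristic first takes a (cost ratio 1/3) and then c, returning ['a', 'c']. A local-search pass, as in the paper, would then try swaps on such a solution until it is locally optimal.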

  5. The trust-region self-consistent field method in Kohn-Sham density-functional theory.

    PubMed

    Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve

    2005-08-15

    The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.

  6. Optimal sparse approximation with integrate and fire neurons.

    PubMed

    Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher

    2014-08-01

    Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially to ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
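
    A minimal sketch of the nonspiking (analog) LCA dynamics that the Spiking LCA emulates, Euler-discretized; the parameter values and dictionary are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def lca(Phi, s, lam=0.1, tau=10.0, dt=1.0, steps=500):
        """Euler-discretized analog LCA (illustrative parameter values).

        Phi: dictionary with unit-norm columns; s: input signal.
        The internal state u evolves under lateral inhibition; the output
        is the soft-thresholded state, an l1-sparse code of s.
        """
        G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
        b = Phi.T @ s                            # feedforward drive
        u = np.zeros(Phi.shape[1])
        soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
        for _ in range(steps):
            u += (dt / tau) * (b - u - G @ soft(u))
        return soft(u)
    ```

    With an orthonormal dictionary the fixed point reduces to soft-thresholding of the dictionary coefficients, which is the exact ℓ1 (lasso) solution in that special case.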

  7. Event-Triggered Distributed Control of Nonlinear Interconnected Systems Using Online Reinforcement Learning With Exploration.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-09-07

    In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented by using a novel hybrid-learning-scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and, subsequently, a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration in the online control framework, using identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.

  8. Enhanced Approximate Nearest Neighbor via Local Area Focused Search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzales, Antonio; Blazier, Nicholas Paul

    Approximate Nearest Neighbor (ANN) algorithms are increasingly important in machine learning, data mining, and image processing applications. There is a large family of space-partitioning ANN algorithms, such as randomized KD-Trees, that work well in practice but are limited by an exponential increase in similarity comparisons required to optimize recall. Additionally, they only support a small set of similarity metrics. We present Local Area Focused Search (LAFS), a method that enhances the way queries are performed using an existing ANN index. Instead of a single query, LAFS performs a number of smaller (fewer similarity comparisons) queries and focuses on a local neighborhood which is refined as candidates are identified. We show that our technique improves performance on several well-known datasets and is easily extended to general similarity metrics using kernel projection techniques.

  9. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As is typical in VEM approaches, the explicit evaluation of the nonpolynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  11. A nonlinear optimal control approach for chaotic finance dynamics

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.

    2017-11-01

    A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher-order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
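
    The approximate-linearization step, a first-order Taylor expansion with finite-difference Jacobians about the current operating point, can be sketched generically as follows. The dynamics f in the usage example is a hypothetical stand-in, not the paper's finance model.

    ```python
    import numpy as np

    def jacobians(f, x0, u0, eps=1e-6):
        """Forward-difference Jacobians A = df/dx and B = df/du at (x0, u0),
        so that near the operating point f(x, u) ~= f(x0, u0) + A (x - x0) + B (u - u0)."""
        f0 = np.asarray(f(x0, u0))
        n, m = len(x0), len(u0)
        A = np.zeros((len(f0), n))
        B = np.zeros((len(f0), m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (np.asarray(f(x0 + dx, u0)) - f0) / eps
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (np.asarray(f(x0, u0 + du)) - f0) / eps
        return A, B
    ```

    The resulting (A, B) pair at each iteration is what an H-infinity synthesis step, via an algebraic Riccati equation, would consume; the truncated higher-order terms are treated as a modelling error to be absorbed by the loop's robustness.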

  12. On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2011-01-01

    This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost-linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Padé approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the s-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin, since it is dependent on the length of a time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound of the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.

  13. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
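
    The classify-by-minimum-approximation-error step can be illustrated with a simplified sketch that substitutes per-class least squares for the paper's ℓ1 sparse coding; the labels and data below are hypothetical.

    ```python
    import numpy as np

    def classify_by_residual(x, class_dicts):
        """Assign x to the class whose training-sample span approximates it best.

        class_dicts: {label: matrix with that class's training samples as columns}
        Least-squares stand-in for SR-based classification: the real method codes
        x as a sparse (l1) combination of ALL training samples, then compares
        per-class reconstruction residuals.
        """
        best_label, best_err = None, np.inf
        for label, D in class_dicts.items():
            coef, *_ = np.linalg.lstsq(D, x, rcond=None)
            err = np.linalg.norm(x - D @ coef)   # approximation error for this class
            if err < best_err:
                best_label, best_err = label, err
        return best_label
    ```

    In the paper's pipeline, the pixels would first be projected by LPP to a low-dimensional space before this residual comparison is made.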

  14. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
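
    One extended-Kalman-filter measurement update with the TDOAs as the observation and the speaker position as the state might look as follows; this is a hedged sketch, and the microphone layout, noise covariance, and speed-of-sound constant in the example are assumptions.

    ```python
    import numpy as np

    C_SOUND = 343.0  # assumed speed of sound, m/s

    def ekf_tdoa_update(x, P, tdoas, pairs, mics, R):
        """One EKF measurement update for TDOA-based source localization.

        x, P  : current position estimate and its covariance
        tdoas : observed time delays of arrival for the given mic pairs
        pairs : list of (i, j) microphone index pairs
        mics  : array of microphone positions
        R     : measurement noise covariance
        """
        dist = lambda m: np.linalg.norm(x - m)
        # Predicted TDOAs and their Jacobian with respect to the position state.
        h = np.array([(dist(mics[i]) - dist(mics[j])) / C_SOUND for i, j in pairs])
        H = np.array([(x - mics[i]) / dist(mics[i]) / C_SOUND
                      - (x - mics[j]) / dist(mics[j]) / C_SOUND for i, j in pairs])
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        return x + K @ (tdoas - h), (np.eye(len(x)) - K @ H) @ P
    ```

    Iterating this update on a fixed set of observed TDOAs behaves like a damped Gauss-Newton refinement of the position, which is why no closed-form initial estimate is required.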

  15. Fabrication and evaluation of plasmonic light-emitting diodes with thin p-type layer and localized Ag particles embedded by ITO

    NASA Astrophysics Data System (ADS)

    Okada, N.; Morishita, N.; Mori, A.; Tsukada, T.; Tateishi, K.; Okamoto, K.; Tadatomo, K.

    2017-04-01

    Light-emitting diodes (LEDs) with a thin p-type layer exploiting the plasmonic effect have been demonstrated. Optimal LED device operation was found when using a 20-nm-thick p+-GaN layer. Ag films of different thicknesses were deposited on the thin p-type layer and annealed to form localized Ag particles. The localized Ag particles were embedded in indium tin oxide to form a p-type electrode in the LED structure. By optimizing the plasmonic LED, significant electroluminescence enhancement was observed when the thickness of Ag was 9.5 nm. Both upward and downward electroluminescence intensities were improved, and the external quantum efficiency was approximately double that of LEDs without the localized Ag particles. The time-resolved photoluminescence (PL) decay time for the LED with the localized Ag particles was shorter than that without them. The faster PL decay indicates an increase in internal quantum efficiency upon adopting the localized Ag particles. To validate the localized surface plasmon resonance coupling effect, the absorption of the LEDs was investigated experimentally and using simulations.

  16. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus imposes an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from getting into regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO, which employs FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
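
    The particle-swarm component can be illustrated with a minimal single-level PSO; this is a generic sketch, not the authors' multi-level, FGA-coupled implementation, and all parameter values are assumptions.

    ```python
    import random

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal particle swarm optimizer with a global-best topology.

        Each particle is pulled toward its own best position (pbest) and the
        swarm's best position (gbest), which lets the swarm escape shallow
        local minima of a non-convex objective f.
        """
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_f = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < gbest_f:
                        gbest, gbest_f = pos[i][:], fi
        return gbest, gbest_f
    ```

    In the paper's multi-level variant, f would be evaluated by FGA wave solvers of increasing fidelity rather than a cheap analytic function.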

  17. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward-in-time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  18. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  19. The Quantum Approximate Optimization Algorithm for MaxCut: A Fermionic View

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2017-01-01

    Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.

  20. First-principles study of low-spin LaCoO3 with structurally consistent Hubbard U

    NASA Astrophysics Data System (ADS)

    Hsu, H.; Umemoto, K.; Cococcioni, M.; Wentzcovitch, R.

    2008-12-01

    We use the local density approximation + Hubbard U (LDA+U) method to calculate the structural and electronic properties of low-spin LaCoO3. The Hubbard U is obtained by first principles and consistent with each fully-optimized atomic structure at different pressures. With structurally consistent U, the fully-optimized atomic structure agrees with experimental data better than the calculations with fixed or vanishing U. A discussion on how the Hubbard U affects the electronic and atomic structure of LaCoO3 is also given.

  2. Simplification of the time-dependent generalized self-interaction correction method using two sets of orbitals: Application of the optimized effective potential formalism

    NASA Astrophysics Data System (ADS)

    Messud, J.; Dinh, P. M.; Reinhard, P.-G.; Suraud, Eric

    2009-10-01

    We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent “generalized SIC-OEP.” A straightforward approximation, using the spatial localization of one set of orbitals, leads to the “generalized SIC-Slater” formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.

  3. Computational benefits using artificial intelligent methodologies for the solution of an environmental design problem: saltwater intrusion.

    PubMed

    Papadopoulou, Maria P; Nikolos, Ioannis K; Karatzas, George P

    2010-01-01

    Artificial Neural Networks (ANNs) comprise a powerful tool to approximate the complicated behavior and response of physical systems allowing considerable reduction in computation time during time-consuming optimization runs. In this work, a Radial Basis Function Artificial Neural Network (RBFN) is combined with a Differential Evolution (DE) algorithm to solve a water resources management problem, using an optimization procedure. The objective of the optimization scheme is to cover the daily water demand on the coastal aquifer east of the city of Heraklion, Crete, without reducing the subsurface water quality due to seawater intrusion. The RBFN is utilized as an on-line surrogate model to approximate the behavior of the aquifer and to replace some of the costly evaluations of an accurate numerical simulation model which solves the subsurface water flow differential equations. The RBFN is used as a local approximation model in such a way as to maintain the robustness of the DE algorithm. The results of this procedure are compared to the corresponding results obtained by using the Simplex method and by using the DE procedure without the surrogate model. As it is demonstrated, the use of the surrogate model accelerates the convergence of the DE optimization procedure and additionally provides a better solution at the same number of exact evaluations, compared to the original DE algorithm.
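
    The surrogate idea, interpolating expensive simulator outputs with radial basis functions so the optimizer can query a cheap stand-in, can be sketched generically. The Gaussian kernel, its width, and the toy data in the usage note are assumptions; the real RBFN approximates an aquifer flow simulator.

    ```python
    import numpy as np

    def rbf_surrogate(X, y, eps=20.0):
        """Fit a Gaussian RBF interpolant to samples (X, y); returns a predictor.

        X: (n, d) sample inputs; y: (n,) expensive-model outputs.
        Solves for weights w such that sum_j w_j * exp(-eps * |x - x_j|^2)
        reproduces y at the sample points (a tiny jitter aids conditioning).
        """
        def kernel(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-eps * d2)
        w = np.linalg.solve(kernel(X, X) + 1e-10 * np.eye(len(X)), y)
        return lambda Z: kernel(Z, X) @ w
    ```

    During the DE run, most candidate evaluations would hit this predictor, with only occasional exact simulator calls used to refresh the local sample set and keep the search robust.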

  4. Orbital dependent functionals: An atom projector augmented wave method implementation

    NASA Astrophysics Data System (ADS)

    Xu, Xiao

    This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These orbital-dependent exchange-correlation functionals have recently received renewed attention as a means to improve the physical representation of electron interactions within electronic structure calculations. In particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. The Fock exchange term is used as a simple, well-defined example of an orbital-dependent functional. The Projector Augmented Wave (PAW) method developed by P. E. Blöchl has proven to be accurate and efficient for electronic structure calculations with local and semilocal functionals because of its accurate evaluation of interaction integrals by controlling multiple moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core-electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane-wave basis and implemented PWPAW-KLI code, which will appear in future publications.

  5. Dynamic approximate entropy electroanatomic maps detect rotors in a simulated atrial fibrillation model.

    PubMed

    Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

    There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
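
    The approximate entropy statistic at the core of the maps can be sketched with the standard Pincus definition; the window length m and tolerance r below are illustrative defaults, not the paper's optimized values.

    ```python
    import math

    def approx_entropy(series, m=2, r=0.2):
        """Approximate entropy ApEn(m, r) of a scalar time series.

        phi(m) averages the log-frequency with which length-m templates
        repeat (within tolerance r, Chebyshev distance); ApEn is the drop
        from phi(m) to phi(m+1): low for regular signals, high for
        irregular (fractionated) ones.
        """
        def phi(k):
            n = len(series) - k + 1
            templates = [series[i:i + k] for i in range(n)]
            freqs = []
            for t in templates:
                c = sum(1 for s in templates
                        if max(abs(a - b) for a, b in zip(t, s)) <= r)
                freqs.append(c / n)
            return sum(math.log(c) for c in freqs) / n
        return phi(m) - phi(m + 1)
    ```

    Computed over a sliding window of electrogram samples at each mapping site, this yields the dynamic entropy maps used to localize rotor tips.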

  6. Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability

    DOE PAGES

    Liu, Guodong; Starke, Michael R.; Xiao, B.; ...

    2017-01-13

    To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using multi-interval approximation, taking into account the probability distributions of forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.

  7. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
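The simulated-annealing core described above fits in a few lines. This generic sketch minimizes a toy one-dimensional multimodal function; it is not the flight software, and every name and parameter here is illustrative:

```python
import math
import random

def simulated_annealing(f, x0, step=0.4, t0=2.0, cooling=0.999,
                        iters=20000, seed=0):
    """Minimize f by simulated annealing: uphill moves are accepted
    with probability exp(-delta/T), so the walk can escape local
    minima while T is large; T decays geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)     # random local proposal
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx          # track the best point seen
        t *= cooling
    return best, fbest

def f(v):
    # Multimodal test function: local minima near multiples of pi/2,
    # global minimum at v = 0
    return v * v + 2.0 * (1.0 - math.cos(4.0 * v))

x, fx = simulated_annealing(f, x0=3.0)
```

Early on, when T is large, uphill moves are accepted often enough to hop between basins; as T decays the walk settles into (ideally) the global minimum.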

  8. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

Because of the limited approximation capability of fixed basis functions, the performance of reflectance estimation obtained by traditional linear models will not be optimal. We propose an approach based on a regularized local linear model. Our approach is efficient and does not require knowledge of the spectral power distribution of the illuminant or the spectral sensitivities of the camera. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
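As a sketch of the regularized-linear-estimation idea, here is plain Tikhonov (ridge) regression for a one-dimensional linear model with a closed-form 2x2 solve; the paper's method additionally localizes such fits around each camera response and tunes the regularization, which this toy omits:

```python
def ridge_line_fit(xs, ys, lam=0.1):
    """Fit y ~ a*x + b by ridge (Tikhonov) regression: minimize
    sum((a*x + b - y)**2) + lam*(a**2 + b**2), solved via the
    regularized 2x2 normal equations."""
    n = len(xs)
    sxx = sum(x * x for x in xs)
    sx = sum(xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy = sum(ys)
    # Normal equations: [[sxx+lam, sx], [sx, n+lam]] @ [a, b] = [sxy, sy]
    det = (sxx + lam) * (n + lam) - sx * sx
    a = ((n + lam) * sxy - sx * sy) / det
    b = ((sxx + lam) * sy - sx * sxy) / det
    return a, b

# Noiseless data on the line y = 2x + 1; a tiny lam recovers it
a, b = ridge_line_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], lam=1e-6)
```

The regularization term keeps the solve well-conditioned when the local neighborhood contains few or nearly collinear samples, which is the situation the paper's local fits face.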

  9. Efficiencies of joint non-local update moves in Monte Carlo simulations of coarse-grained polymers

    NASA Astrophysics Data System (ADS)

    Austin, Kieran S.; Marenz, Martin; Janke, Wolfhard

    2018-03-01

    In this study four update methods are compared in their performance in a Monte Carlo simulation of polymers in continuum space. The efficiencies of the update methods and combinations thereof are compared with the aid of the autocorrelation time with a fixed (optimal) acceptance ratio. Results are obtained for polymer lengths N = 14, 28 and 42 and temperatures below, at and above the collapse transition. In terms of autocorrelation, the optimal acceptance ratio is approximately 0.4. Furthermore, an overview of the step sizes of the update methods that correspond to this optimal acceptance ratio is given. This shall serve as a guide for future studies that rely on efficient computer simulations.
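The integrated autocorrelation time used to compare the update moves can be estimated directly from a time series of measurements. This is a standard estimator with a simple positive-window cutoff, not necessarily the authors' implementation:

```python
def autocorrelation_time(series):
    """Integrated autocorrelation time tau = 1 + 2 * sum_k rho(k),
    summing lags while the normalized autocorrelation stays positive
    (a simple windowing choice; normalizations differ slightly
    between the variance and covariance estimates)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    tau = 1.0
    for k in range(1, n // 2):
        cov = sum((series[i] - mean) * (series[i + k] - mean)
                  for i in range(n - k)) / (n - k)
        rho = cov / var
        if rho <= 0:
            break
        tau += 2.0 * rho
    return tau
```

A larger tau means successive measurements are more correlated, i.e. the update move decorrelates the chain more slowly; comparing tau at a fixed acceptance ratio is exactly the comparison the study performs.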

  10. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.
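The firm-shrinkage step has a simple closed form. Below is the classical firm-thresholding operator of Gao and Bruce; taking it as the relevant proximal map is an assumption of this sketch, since the exact operator depends on the weakly convex penalty chosen:

```python
def firm_shrinkage(x, lam, mu):
    """Firm thresholding: identical to soft thresholding near zero,
    the identity for large |x|, and linear in between.
    Requires mu > lam > 0."""
    ax = abs(x)
    if ax <= lam:
        return 0.0                        # small coefficients -> exactly zero
    if ax <= mu:
        sign = 1.0 if x > 0 else -1.0
        return sign * mu * (ax - lam) / (mu - lam)
    return x                              # large coefficients left unbiased

shrunk = [firm_shrinkage(v, 1.0, 2.0) for v in (-3.0, -1.5, 0.5, 1.5, 3.0)]
```

An iterative algorithm of this type alternates a gradient step on the logistic loss with this operator applied coordinate-wise; unlike soft thresholding, large coefficients are not shrunk, which is the bias advantage of the weakly convex penalty.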

  11. Optimization of spatiotemporally fractionated radiotherapy treatments with bounds on the achievable benefit

    NASA Astrophysics Data System (ADS)

    Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid

    2018-01-01

    Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. 
This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.
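The BED objective behind these models is the standard linear-quadratic expression, and a quick numeric check shows the effect spatiotemporal plans exploit: the same physical dose delivered in fewer, larger fractions has a higher BED:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose for n fractions of size d under the
    linear-quadratic model: BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction
    return n_fractions * d * (1.0 + d / alpha_beta)

# Same 60 Gy physical dose, alpha/beta = 10 Gy (a typical tumor value):
standard = bed(30, 2.0, 10.0)   # 30 x 2 Gy
hypo = bed(15, 4.0, 10.0)       # 15 x 4 Gy -> higher BED
```

Hypofractionating parts of the tumor raises tumor BED, while keeping each normal-tissue voxel close to uniform fractionation keeps its BED low; the quadratic dependence on per-fraction dose is also what makes the planning problem a nonconvex QCQP.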

  12. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, which results in a bang-singular-bang optimal control.
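The Chebyshev machinery in the direct method can be sketched generically: interpolate at Chebyshev nodes and evaluate the resulting series with Clenshaw's recurrence. This is standard approximation code, not the paper's trajectory parameterization:

```python
import math

def chebyshev_fit(f, degree):
    """Interpolate f on [-1, 1] at the Chebyshev nodes and return the
    Chebyshev-series coefficients (discrete cosine construction)."""
    n = degree + 1
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fvals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0
    return coeffs

def chebyshev_eval(coeffs, x):
    """Evaluate the Chebyshev series with Clenshaw's recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# Degree 8 already approximates exp on [-1, 1] to near machine accuracy
coeffs = chebyshev_fit(math.exp, 8)
err = max(abs(chebyshev_eval(coeffs, t / 100.0) - math.exp(t / 100.0))
          for t in range(-100, 101))
```

Smooth state and control histories converge spectrally under such expansions, which is why few coefficients suffice in direct trajectory optimization.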

  13. Dynamic Approximate Entropy Electroanatomic Maps Detect Rotors in a Simulated Atrial Fibrillation Model

    PubMed Central

    Ugarte, Juan P.; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John

    2014-01-01

There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips of stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping. PMID:25489858
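Approximate entropy itself is short to implement. The sketch below follows the standard Pincus formulation with self-matches included and a tolerance proportional to the signal's standard deviation; the paper goes further and optimizes the (m, r) choice for electrogram data:

```python
import math

def approximate_entropy(signal, m=2, r_factor=0.2):
    """ApEn(m, r) with r = r_factor * std(signal): measures how often
    length-m patterns that match within r keep matching when extended
    to length m+1. Regular signals give values near 0; irregular
    (fractionated) signals give larger values."""
    n = len(signal)
    mean = sum(signal) / n
    r = r_factor * math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    def phi(m):
        templ = [signal[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t in templ:
            matches = sum(1 for u in templ
                          if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(matches / len(templ))
        return total / len(templ)
    return phi(m) - phi(m + 1)

periodic = [float(i % 4) for i in range(100)]   # perfectly regular
x, chaotic = 0.3, []
for _ in range(100):
    x = 4.0 * x * (1.0 - x)                     # logistic map: irregular
    chaotic.append(x)
```

Computing this statistic over a sliding window at every node of the atrial mesh is what turns it into the dynamic entropy map described above.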

  14. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126

  15. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which its changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  16. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

A technique is described that optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs for system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the global optimum designs. The results of generator performance calculations are presented for an approximately 2000 MWe size plant. The difference between the maximum-power generator design and the optimal design that maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics

Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in a sense that allows the approximation of the potential of mean force under both linear and non-linear coarse-graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization, which we derive for general non-linear coarse-graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.

  18. The lowest-order weak Galerkin finite element method for the Darcy equation on quadrilateral and hybrid meshes

    NASA Astrophysics Data System (ADS)

    Liu, Jiangguo; Tavener, Simon; Wang, Zhuoran

    2018-04-01

    This paper investigates the lowest-order weak Galerkin finite element method for solving the Darcy equation on quadrilateral and hybrid meshes consisting of quadrilaterals and triangles. In this approach, the pressure is approximated by constants in element interiors and on edges. The discrete weak gradients of these constant basis functions are specified in local Raviart-Thomas spaces, specifically RT0 for triangles and unmapped RT[0] for quadrilaterals. These discrete weak gradients are used to approximate the classical gradient when solving the Darcy equation. The method produces continuous normal fluxes and is locally mass-conservative, regardless of mesh quality, and has optimal order convergence in pressure, velocity, and normal flux, when the quadrilaterals are asymptotically parallelograms. Implementation is straightforward and results in symmetric positive-definite discrete linear systems. We present numerical experiments and comparisons with other existing methods.

  19. Surgical Site Infiltration for Abdominal Surgery: A Novel Neuroanatomical-based Approach

    PubMed Central

    Janis, Jeffrey E.; Haas, Eric M.; Ramshaw, Bruce J.; Nihira, Mikio A.; Dunkin, Brian J.

    2016-01-01

Background: Provision of optimal postoperative analgesia should facilitate postoperative ambulation and rehabilitation. An optimal multimodal analgesia technique would include the use of nonopioid analgesics, including local/regional analgesic techniques such as surgical site local anesthetic infiltration. This article presents a novel approach to surgical site infiltration techniques for abdominal surgery based upon neuroanatomy. Methods: Literature searches were conducted for studies reporting the neuroanatomical sources of pain after abdominal surgery. Studies identified by the preceding search were also reviewed for relevant publications, which were manually retrieved. Results: Based on neuroanatomy, an optimal surgical site infiltration technique would consist of systematic, extensive, meticulous administration of local anesthetic into the peritoneum (or preperitoneum), subfascial, and subdermal tissue planes. The volume of local anesthetic would depend on the size of the incision such that 1 to 1.5 mL is injected every 1 to 2 cm of surgical incision per layer. It is best to infiltrate with a 22-gauge, 1.5-inch needle. The needle is inserted approximately 0.5 to 1 cm into the tissue plane, and local anesthetic solution is injected while slowly withdrawing the needle, which should reduce the risk of intravascular injection. Conclusions: Meticulous, systematic, and extensive surgical site local anesthetic infiltration in the various tissue planes including the peritoneal, musculofascial, and subdermal tissues, where pain foci originate, provides excellent postoperative pain relief. This approach should be combined with use of other nonopioid analgesics with opioids reserved for rescue. Further well-designed studies are necessary to assess the analgesic efficacy of the proposed infiltration technique. PMID:28293525
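The stated dosing geometry is simple arithmetic. Below is a toy calculator using midpoints of the quoted ranges (1 to 1.5 mL per site, one site every 1 to 2 cm, per tissue layer); it illustrates the rule of thumb only and is not clinical guidance:

```python
def infiltration_volume(incision_cm, layers=3, ml_per_site=1.25,
                        site_spacing_cm=1.5):
    """Estimated total local anesthetic volume: one injection site per
    site_spacing_cm of incision, in each of the tissue layers
    (peritoneal, musculofascial, subdermal). Midpoint defaults from
    the ranges quoted in the abstract; illustrative arithmetic only."""
    sites_per_layer = max(1, round(incision_cm / site_spacing_cm))
    return layers * sites_per_layer * ml_per_site
```

For example, a 12 cm incision at these midpoint values gives 8 sites per layer across 3 layers, i.e. 30 mL, a total that would still need checking against the maximum safe dose of the chosen local anesthetic.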

  20. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  1. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
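The level-1 case is small enough to simulate exactly. The sketch below builds the statevector of a level-1 QAOA MaxCut circuit on a triangle and grid-searches the two parameters; it is an illustrative brute-force check, not the paper's fermionic analysis:

```python
import cmath
import math

def qaoa_level1_maxcut(edges, n, gamma, beta):
    """Statevector simulation of level-1 QAOA for MaxCut: start in
    |+>^n, apply the diagonal cost phase exp(-i*gamma*C), then the
    mixer exp(-i*beta*X) on every qubit, and return <C>."""
    dim = 1 << n
    cut = [sum(((z >> u) & 1) != ((z >> v) & 1) for u, v in edges)
           for z in range(dim)]
    # Uniform superposition with the cost phase already applied
    amp = [cmath.exp(-1j * gamma * cut[z]) / math.sqrt(dim)
           for z in range(dim)]
    c, s = math.cos(beta), -1j * math.sin(beta)
    for q in range(n):                    # single-qubit mixer rotations
        new = list(amp)
        for z in range(dim):
            if not (z >> q) & 1:
                z1 = z | (1 << q)
                new[z] = c * amp[z] + s * amp[z1]
                new[z1] = s * amp[z] + c * amp[z1]
        amp = new
    return sum(abs(a) ** 2 * cut[z] for z, a in enumerate(amp))

triangle = [(0, 1), (1, 2), (0, 2)]       # max cut = 2
best = max(qaoa_level1_maxcut(triangle, 3, g * math.pi / 20, b * math.pi / 20)
           for g in range(20) for b in range(20))
```

The grid maximum approaches the true level-1 optimum as the grid is refined, and the expectation can never exceed the maximum cut; the symmetries discussed in the paper would let this search range be cut down further.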

  2. Neural mechanism of optimal limb coordination in crustacean swimming

    PubMed Central

    Zhang, Calvin; Guy, Robert D.; Mulloney, Brian; Zhang, Qinghai; Lewis, Timothy J.

    2014-01-01

    A fundamental challenge in neuroscience is to understand how biologically salient motor behaviors emerge from properties of the underlying neural circuits. Crayfish, krill, prawns, lobsters, and other long-tailed crustaceans swim by rhythmically moving limbs called swimmerets. Over the entire biological range of animal size and paddling frequency, movements of adjacent swimmerets maintain an approximate quarter-period phase difference with the more posterior limbs leading the cycle. We use a computational fluid dynamics model to show that this frequency-invariant stroke pattern is the most effective and mechanically efficient paddling rhythm across the full range of biologically relevant Reynolds numbers in crustacean swimming. We then show that the organization of the neural circuit underlying swimmeret coordination provides a robust mechanism for generating this stroke pattern. Specifically, the wave-like limb coordination emerges robustly from a combination of the half-center structure of the local central pattern generating circuits (CPGs) that drive the movements of each limb, the asymmetric network topology of the connections between local CPGs, and the phase response properties of the local CPGs, which we measure experimentally. Thus, the crustacean swimmeret system serves as a concrete example in which the architecture of a neural circuit leads to optimal behavior in a robust manner. Furthermore, we consider all possible connection topologies between local CPGs and show that the natural connectivity pattern generates the biomechanically optimal stroke pattern most robustly. Given the high metabolic cost of crustacean swimming, our results suggest that natural selection has pushed the swimmeret neural circuit toward a connection topology that produces optimal behavior. PMID:25201976

  3. Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements

    NASA Astrophysics Data System (ADS)

    Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso

    2017-09-01

    This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.

  4. Communication: Near-locality of exchange and correlation density functionals for 1- and 2-electron systems

    NASA Astrophysics Data System (ADS)

    Sun, Jianwei; Perdew, John P.; Yang, Zenghui; Peng, Haowei

    2016-05-01

    The uniform electron gas and the hydrogen atom play fundamental roles in condensed matter physics and quantum chemistry. The former has an infinite number of electrons uniformly distributed over the neutralizing positively charged background, and the latter only one electron bound to the proton. The uniform electron gas was used to derive the local spin density approximation to the exchange-correlation functional that undergirds the development of the Kohn-Sham density functional theory. We show here that the ground-state exchange-correlation energies of the hydrogen atom and many other 1- and 2-electron systems are modeled surprisingly well by a different local spin density approximation (LSDA0). LSDA0 is constructed to satisfy exact constraints but agrees surprisingly well with the exact results for a uniform two-electron density in a finite, curved three-dimensional space. We also apply LSDA0 to excited or noded 1-electron densities, where it works less well. Furthermore, we show that the localization of the exact exchange hole for a 1- or 2-electron ground state can be measured by the ratio of the exact exchange energy to its optimal lower bound.

  5. The benefits of adaptive parametrization in multi-objective Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John

    2010-10-01

In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective: higher-quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).

  6. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem-solving tasks, including optimal control, process optimization, game-playing strategy development, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies address the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to the 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates that evolutionary algorithms occupy a clear niche in solving such difficult problems of practical importance, compared with their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming, and we hope it will motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.
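The nested structure that makes these problems expensive is easy to see in a toy single-objective sketch, where every upper-level candidate triggers a full lower-level solve (hypothetical objective functions, grid search in place of real optimizers):

```python
def solve_bilevel(upper, lower, xs, ys):
    """Nested solve of a toy bilevel problem: for each leader choice x,
    the follower picks y minimizing lower(x, y); the leader then
    minimizes upper(x, y*(x)). Every upper-level candidate costs one
    full lower-level optimization, which is the expense the paper's
    hybrid algorithm is designed to avoid."""
    best = None
    for x in xs:
        y_star = min(ys, key=lambda y: lower(x, y))   # follower's response
        val = upper(x, y_star)
        if best is None or val < best[0]:
            best = (val, x, y_star)
    return best

# Toy instance: the follower tracks y ~ x; the leader wants x + y = 1
grid = [i / 10 for i in range(21)]
val, x, y = solve_bilevel(lambda x, y: (x + y - 1.0) ** 2,
                          lambda x, y: (y - x) ** 2, grid, grid)
```

Even this tiny instance evaluates the lower-level objective 21 times per leader candidate; with simulation-based objectives and multiple objectives per level, the nested cost grows quickly.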

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jianwei; Yang, Zenghui; Peng, Haowei

The uniform electron gas and the hydrogen atom play fundamental roles in condensed matter physics and quantum chemistry. The former has an infinite number of electrons uniformly distributed over the neutralizing positively charged background, and the latter only one electron bound to the proton. The uniform electron gas was used to derive the local spin density approximation to the exchange-correlation functional that undergirds the development of the Kohn-Sham density functional theory. We show here that the ground-state exchange-correlation energies of the hydrogen atom and many other 1- and 2-electron systems are modeled surprisingly well by a different local spin density approximation (LSDA0). LSDA0 is constructed to satisfy exact constraints but agrees surprisingly well with the exact results for a uniform two-electron density in a finite, curved three-dimensional space. We also apply LSDA0 to excited or noded 1-electron densities, where it works less well. Furthermore, we show that the localization of the exact exchange hole for a 1- or 2-electron ground state can be measured by the ratio of the exact exchange energy to its optimal lower bound.

  8. Bounded-Degree Approximations of Stochastic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
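For discrete distributions, the Kullback-Leibler objective used to rank candidate approximations is a one-liner (the generic definition; the paper's quantity is a divergence between process distributions built from directed information):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats between two
    discrete distributions given as aligned probability lists.
    Zero-probability terms in p contribute nothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
uniform = [1 / 3, 1 / 3, 1 / 3]
gap = kl_divergence(p, uniform)   # cost of the coarsest approximation
```

A sparser graphical approximation constrains the joint distribution more, so its best achievable divergence from the true model can only grow as edges are removed; that monotone trade-off is what the user-chosen sparsity controls.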

  9. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through the use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code, was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  10. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
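
    The core loop of extremal optimization is short enough to sketch. The toy below is an illustrative tau-EO variant on a small ferromagnetic Ising instance (the instance and parameters are assumptions for illustration, not from the paper): spins are ranked by local fitness, and one is flipped unconditionally, chosen with probability decaying as a power law of its rank.

```python
import random

def tau_eo(J, steps=500, tau=1.4, seed=1):
    """tau-EO sketch: rank variables by local fitness, flip one chosen
    with probability proportional to rank**(-tau), track the best state."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(cfg):
        return -sum(J[i][j] * cfg[i] * cfg[j]
                    for i in range(n) for j in range(i + 1, n))

    weights = [(r + 1) ** -tau for r in range(n)]   # power-law rank weights
    total = sum(weights)
    best_e, best_s = energy(s), s[:]
    for _ in range(steps):
        # fitness of spin i = how well it agrees with its neighbours
        fit = sorted((sum(J[i][j] * s[i] * s[j] for j in range(n) if j != i), i)
                     for i in range(n))             # worst fitness first
        r, acc = rng.random() * total, 0.0
        for rank, w in enumerate(weights):
            acc += w
            if acc >= r:
                break
        s[fit[rank][1]] *= -1                       # unconditional flip
        e = energy(s)
        if e < best_e:
            best_e, best_s = e, s[:]
    return best_e, best_s
```

    On this easy instance the avalanche-like dynamics quickly visits the aligned ground state while the unconditional flips keep the search exploring.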

  11. Broken symmetry in a two-qubit quantum control landscape

    NASA Astrophysics Data System (ADS)

    Bukov, Marin; Day, Alexandre G. R.; Weinberg, Phillip; Polkovnikov, Anatoli; Mehta, Pankaj; Sels, Dries

    2018-05-01

    We analyze the physics of optimal protocols to prepare a target state with high fidelity in a symmetrically coupled two-qubit system. By varying the protocol duration, we find a discontinuous phase transition, which is characterized by a spontaneous breaking of a Z2 symmetry in the functional form of the optimal protocol, and occurs below the quantum speed limit. We study this phase in detail and demonstrate that even though high-fidelity protocols become degenerate with respect to their fidelity, they lead to final states of different entanglement entropy shared between the qubits. Consequently, while globally both optimal protocols are equally far away from the target state, one is locally closer than the other. An approximate variational mean-field theory which captures the physics of the different phases is developed.

  12. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational effort.
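
    The penalty-growing augmented Lagrangian scheme can be illustrated on a toy equality-constrained problem (the problem and step sizes below are assumptions for illustration; the paper applies the idea to the much harder OPF):

```python
def augmented_lagrangian(outer=8, inner=400, rho=1.0, growth=2.0):
    """Augmented-Lagrangian sketch with the penalty parameter increased at
    every outer iteration, applied to the toy problem
        minimize (x-1)^2 + (y-2)^2  subject to  x + y = 1."""
    x = y = 0.0
    lam = 0.0
    for _ in range(outer):
        lr = 1.0 / (2.0 + 2.0 * rho)     # safe step size for this quadratic
        for _ in range(inner):           # inner problem: gradient descent
            h = x + y - 1.0
            gx = 2.0 * (x - 1.0) + lam + rho * h
            gy = 2.0 * (y - 2.0) + lam + rho * h
            x -= lr * gx
            y -= lr * gy
        lam += rho * (x + y - 1.0)       # multiplier update
        rho *= growth                    # key feature: rho keeps growing
    return x, y

# The exact solution of the toy problem is (x, y) = (0, 1).
xs, ys = augmented_lagrangian()
```

    The multiplier update drives the constraint residual to zero while the growing rho enforces feasibility ever more strongly, mirroring the "constantly increased" penalty parameters described above.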

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamoureux, Louis-Philippe; Navez, Patrick; Cerf, Nicolas J.

    It is shown that any quantum operation that perfectly clones the entanglement of all maximally entangled qubit pairs cannot preserve separability. This 'entanglement no-cloning' principle naturally suggests that some approximate cloning of entanglement is nevertheless allowed by quantum mechanics. We investigate a separability-preserving optimal cloning machine that duplicates all maximally entangled states of two qubits, resulting in 0.285 bits of entanglement per clone, while a local cloning machine only yields 0.060 bits of entanglement per clone.

  14. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm, obtaining a better lower order approximation that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated into SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  15. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
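
    The mechanism by which an L1 term yields compact support can be seen in a few lines. The sketch below is a toy stand-in (no orthogonality constraint, a separable quadratic instead of a Hamiltonian): iterative soft-thresholding zeroes out small entries exactly, rather than merely shrinking them.

```python
import math

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink toward zero, clip to exactly zero."""
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def ista(b, mu, steps=200, lr=0.5):
    """Proximal-gradient (ISTA) sketch for  min_x 0.5*||x - b||^2 + mu*||x||_1."""
    x = [0.0] * len(b)
    for _ in range(steps):
        x = soft_threshold([xi - lr * (xi - bi) for xi, bi in zip(x, b)], lr * mu)
    return x

# A smooth, delocalized input profile ...
b = [math.exp(-0.5 * (i - 10) ** 2 / 9.0) for i in range(21)]
x = ista(b, mu=0.3)
# ... is mapped to a minimizer with strictly compact support: entries far
# from the peak become exactly zero, not merely small.
```

    This is the same L1 mechanism that gives the compressed modes their compact support, isolated from the eigenproblem structure.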

  16. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.

  17. Extremal optimization for Sherrington-Kirkpatrick spin glasses

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    2005-08-01

    Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to less than 1% accuracy, rational values of ω=2/3 for the finite-size correction exponent and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has been obtained analytically yet. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.

  18. Course 4: Density Functional Theory, Methods, Techniques, and Applications

    NASA Astrophysics Data System (ADS)

    Chrétien, S.; Salahub, D. R.

    Contents 1 Introduction 2 Density functional theory 2.1 Hohenberg and Kohn theorems 2.2 Levy's constrained search 2.3 Kohn-Sham method 3 Density matrices and pair correlation functions 4 Adiabatic connection or coupling strength integration 5 Comparing and contrasting KS-DFT and HF-CI 6 Preparing new functionals 7 Approximate exchange and correlation functionals 7.1 The Local Spin Density Approximation (LSDA) 7.2 Gradient Expansion Approximation (GEA) 7.3 Generalized Gradient Approximation (GGA) 7.4 meta-Generalized Gradient Approximation (meta-GGA) 7.5 Hybrid functionals 7.6 The Optimized Effective Potential method (OEP) 7.7 Comparison between various approximate functionals 8 LAP correlation functional 9 Solving the Kohn-Sham equations 9.1 The Kohn-Sham orbitals 9.2 Coulomb potential 9.3 Exchange-correlation potential 9.4 Core potential 9.5 Other choices and sources of error 9.6 Functionality 10 Applications 10.1 Ab initio molecular dynamics for an alanine dipeptide model 10.2 Transition metal clusters: The ecstasy, and the agony... 10.3 The conversion of acetylene to benzene on Fe clusters 11 Conclusions

  19. Finite element approximation of an optimal control problem for the von Karman equations

    NASA Technical Reports Server (NTRS)

    Hou, L. Steven; Turner, James C.

    1994-01-01

    This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.

  20. Applications of hybrid genetic algorithms in seismic tomography

    NASA Astrophysics Data System (ADS)

    Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos

    2011-11-01

    Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model, hence it is prone to solution-entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data-fit. A typical example is the class of methods known as genetic algorithms (GA), which achieves the aforementioned approximation through model representation and manipulations, and has attracted the attention of the earth sciences community during the last decade, with several applications already presented for several geophysical problems. In this paper, we examine the efficiency of the combination of the typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. 
The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used for testing the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
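
    The hybrid global/local (GOM/LOM) idea can be sketched generically. The toy below is an assumption-laden illustration, not the authors' tomography code: a simple genetic algorithm supplies a good starting model, which a derivative-free pattern search then refines.

```python
import random

def himmelblau(p):
    """Classic multimodal test objective; all four of its minima have value 0."""
    x, y = p
    return (x * x + y - 11.0) ** 2 + (x + y * y - 7.0) ** 2

def hybrid_ga_local(f, seed=3, pop=30, gens=40):
    """Global phase: elitist GA with averaging crossover and Gaussian mutation.
    Local phase: coordinate pattern search started from the best GA individual."""
    rng = random.Random(seed)
    P = [[rng.uniform(-6, 6), rng.uniform(-6, 6)] for _ in range(pop)]
    for _ in range(gens):                              # --- global phase
        P.sort(key=f)
        nxt = P[:pop // 2]                             # keep the elite half
        while len(nxt) < pop:
            a, b = rng.sample(P[:pop // 2], 2)
            nxt.append([(u + v) / 2 + rng.gauss(0, 0.3)  # crossover + mutation
                        for u, v in zip(a, b)])
        P = nxt
    best = min(P, key=f)
    step = 0.5                                         # --- local phase
    while step > 1e-7:
        moved = False
        for i in (0, 1):
            for d in (step, -step):
                cand = best[:]
                cand[i] += d
                if f(cand) < f(best):
                    best, moved = cand, True
        if not moved:
            step *= 0.5
    return best, f(best)
```

    As in the paper, the global phase avoids entrapment in a poor basin while the local phase supplies the fast final convergence.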

  1. Ab Initio study on structural, electronic, magnetic and dielectric properties of LSNO within Density Functional Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Petersen, John; Bechstedt, Friedhelm; Furthmüller, Jürgen; Scolfaro, Luisa

    LSNO (La2-xSrxNiO4) is of great interest due to its colossal dielectric constant (CDC) and rich underlying physics. While being an antiferromagnetic insulator, localized holes are present in the form of stripes in the Ni-O planes which are commensurate with the inverse of the Sr concentration. The stripes are a manifestation of charge density waves with period approximately 1/x and spin density waves with period approximately 2/x. Here, the spin ground state is calculated via LSDA + U with the PAW method implemented in VASP. Crystal structure and the effective Hubbard U parameter are optimized before calculating ɛ∞ within the independent particle approximation. ɛ∞ and the full static dielectric constant (including the lattice polarizability) ɛ0 are calculated within Density Functional Perturbation Theory.

  2. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

    This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to that of the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to that of the Lasso in general.
Furthermore, computing a solution that satisfies the S3ONC admits an FPTAS.

  3. The calculations of small molecular conformation energy differences by density functional method

    NASA Astrophysics Data System (ADS)

    Topol, I. A.; Burt, S. K.

    1993-03-01

    The differences in the conformational energies for the gauche (G) and trans (T) conformers of 1,2-difluoroethane and for the myo- and scyllo-conformers of inositol have been calculated by the local density functional (LDF) method with geometry optimization, using different sets of calculation parameters. It is shown that, in contrast to Hartree-Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and the energy difference between the two conformers of inositol. The results of a normal vibrational analysis for 1,2-difluoroethane showed that harmonic frequencies calculated in the LDF approximation agree with experimental data to the accuracy typical of scaled large basis set Hartree-Fock calculations.

  4. Geodesic regression for image time-series.

    PubMed

    Niethammer, Marc; Huang, Yang; Vialard, François-Xavier

    2011-01-01

    Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

  5. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, method of feasible directions, sequence of quadratic programming, and sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
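
    A minimal sketch of optimization driven entirely by approximate sensitivities (generic forward differences on a toy objective; the paper's approximation is specific to behavior constraints and is cheaper than plain finite differencing):

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation to the gradient of f at x."""
    f0, g = f(x), []
    for i in range(len(x)):
        xh = x[:]
        xh[i] += h
        g.append((f(xh) - f0) / h)
    return g

def descend(f, x, lr=0.1, steps=500):
    """Steepest descent that never sees an exact gradient."""
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, fd_gradient(f, x))]
    return x

# Toy smooth objective standing in for a merit function over design variables.
f = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2
xopt = descend(f, [0.0, 0.0])
```

    The point mirrors the abstract's finding: when the approximation error is small relative to the optimizer's tolerance, the approximate gradients still drive the iterates to the correct optimum.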

  6. Investigations of quantum heuristics for optimization

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui

    We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numeric investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.

  7. An efficient and practical approach to obtain a better optimum solution for structural optimization

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Yu; Huang, Jyun-Hao

    2013-08-01

    For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
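
    The reduced-search-space idea can be sketched with a crude stand-in for the data mining step (the paper uses classification, association and clustering; the toy below simply keeps the bounding box of the best sampled designs, then confines a local search to it):

```python
import random

def reduce_and_search(f, bounds, n_samples=400, keep=0.1, seed=7):
    """Sample the original design space, 'mine' a reduced box from the best
    fraction of samples, then run a pattern search restricted to that box."""
    rng = random.Random(seed)
    dim = len(bounds)
    samples = [[rng.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(n_samples)]
    samples.sort(key=f)
    top = samples[:max(2, int(keep * n_samples))]
    # reduced design space: bounding box of the best designs
    reduced = [(min(p[i] for p in top), max(p[i] for p in top))
               for i in range(dim)]
    x = top[0][:]
    step = max(hi - lo for lo, hi in reduced) / 4 or 1e-3
    while step > 1e-8:                  # local search confined to the box
        moved = False
        for i in range(dim):
            for d in (step, -step):
                c = x[:]
                c[i] = min(max(c[i] + d, reduced[i][0]), reduced[i][1])
                if f(c) < f(x):
                    x, moved = c, True
        if not moved:
            step *= 0.5
    return x, f(x)

# Toy "design" objective on a large original space.
obj = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
xbest, vbest = reduce_and_search(obj, [(-10.0, 10.0), (-10.0, 10.0)])
```

    The local solver here plays the role of SQP in the abstract: it only ever explores the smaller, data-derived region.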

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamzam, Ahmed S.; Zhao, Changhong; Dall'Anese, Emiliano

    This paper examines the AC Optimal Power Flow (OPF) problem for multiphase distribution networks featuring renewable energy resources (RESs). We start by outlining a power flow model for radial multiphase systems that accommodates wye-connected and delta-connected RESs and non-controllable energy assets. We then formalize an AC OPF problem that accounts for both types of connections. Similar to various AC OPF renditions, the resultant problem is a non-convex quadratically constrained quadratic program. However, the so-called Feasible Point Pursuit-Successive Convex Approximation algorithm is leveraged to obtain a feasible and yet locally-optimal solution. The merits of the proposed solution approach are demonstrated using two unbalanced multiphase distribution feeders with both wye and delta connections.

  9. Approximation of the Newton Step by a Defect Correction Process

    NASA Technical Reports Server (NTRS)

    Arian, E.; Batterman, A.; Sachs, E. W.

    1999-01-01

    In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator is introduced into the system. This operator is motivated by local mode analysis. The operator can be used also for preconditioning in Generalized Minimum Residual (GMRES). We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are done on the small disturbance shape optimization problem in two dimensions for the defect correction process and for GMRES.

  10. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, and a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  11. Influence of cost functions and optimization methods on solving the inverse problem in spatially resolved diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.

    2017-03-01

    Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for in vivo characterization of pathological modifications in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method is discussed.
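
    The proposed global strategy, repeated calls to a local method from randomly sampled initial parameters, can be sketched in miniature (toy one-dimensional double-well objective and plain gradient descent below are assumptions; the paper's local solvers are Levenberg-Marquardt and Trust-Region-Reflective):

```python
import random

def multistart(f, grad, n_starts=20, lo=-3.0, hi=3.0, lr=0.01, steps=2000, seed=5):
    """Repeatedly run a local descent from random initial parameters and
    keep the best result found across all starts."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):          # local phase: gradient descent
            x -= lr * grad(x)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Double-well objective: shallow local minimum near x = +1, deeper global
# minimum near x = -1; a single local run can land in either basin.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
g = lambda x: 4.0 * x * (x * x - 1.0) + 0.3
xstar, fstar = multistart(f, g)
```

    Restarting from random initial parameters is what makes the final estimate insensitive to any single poor initialization, which is exactly the sensitivity the abstract sets out to mitigate.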

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
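
    With step size 1 the procedure reduces to the familiar fixed-point (EM-style) update. The sketch below illustrates it for a heavily simplified setting, assumed for illustration only: a 1-D mixture of two unit-variance, equal-weight normals, with only the means estimated.

```python
import math
import random

def em_two_means(data, steps=100, step_size=1.0):
    """Successive-approximations iteration for the two means of a 1-D mixture
    of unit-variance, equal-weight normals. step_size=1.0 gives the classical
    fixed-point update; the paper proves local convergence for any step size
    in (0, 2)."""
    m1, m2 = min(data), max(data)              # crude initialization
    for _ in range(steps):
        r1 = []
        for x in data:
            p1 = math.exp(-0.5 * (x - m1) ** 2)
            p2 = math.exp(-0.5 * (x - m2) ** 2)
            r1.append(p1 / (p1 + p2))          # responsibility of component 1
        n1 = sum(r1)
        n2 = len(data) - n1
        new1 = sum(r * x for r, x in zip(r1, data)) / n1
        new2 = sum((1 - r) * x for r, x in zip(r1, data)) / n2
        m1 += step_size * (new1 - m1)          # deflected-gradient step
        m2 += step_size * (new2 - m2)
    return m1, m2

# Synthetic check: two well-separated components.
rng = random.Random(42)
data = ([rng.gauss(-2.0, 1.0) for _ in range(200)]
        + [rng.gauss(2.0, 1.0) for _ in range(200)])
m1, m2 = em_two_means(data)
```

    As the abstract notes, well-separated components are the favorable case: the larger the separation, the larger the step size that still yields fast local convergence.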

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  14. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    NASA Astrophysics Data System (ADS)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  15. Derivative discontinuity and exchange-correlation potential of meta-GGAs in density-functional theory.

    PubMed

    Eich, F G; Hellgren, Maria

    2014-12-14

    We investigate fundamental properties of meta-generalized-gradient approximations (meta-GGAs) to the exchange-correlation energy functional, which have an implicit density dependence via the Kohn-Sham kinetic-energy density. To this purpose, we construct the most simple meta-GGA by expressing the local exchange-correlation energy per particle as a function of a fictitious density, which is obtained by inverting the Thomas-Fermi kinetic-energy functional. This simple functional considerably improves the total energy of atoms as compared to the standard local density approximation. The corresponding exchange-correlation potentials are then determined exactly through a solution of the optimized effective potential equation. These potentials support an additional bound state and exhibit a derivative discontinuity at integer particle numbers. We further demonstrate that through the kinetic-energy density any meta-GGA incorporates a derivative discontinuity. However, we also find that for commonly used meta-GGAs the discontinuity is largely underestimated and in some cases even negative.

  16. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.

  17. Derivative discontinuity and exchange-correlation potential of meta-GGAs in density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eich, F. G., E-mail: eichf@missouri.edu; Hellgren, Maria

    2014-12-14

We investigate fundamental properties of meta-generalized-gradient approximations (meta-GGAs) to the exchange-correlation energy functional, which have an implicit density dependence via the Kohn-Sham kinetic-energy density. To this purpose, we construct the most simple meta-GGA by expressing the local exchange-correlation energy per particle as a function of a fictitious density, which is obtained by inverting the Thomas-Fermi kinetic-energy functional. This simple functional considerably improves the total energy of atoms as compared to the standard local density approximation. The corresponding exchange-correlation potentials are then determined exactly through a solution of the optimized effective potential equation. These potentials support an additional bound state and exhibit a derivative discontinuity at integer particle numbers. We further demonstrate that through the kinetic-energy density any meta-GGA incorporates a derivative discontinuity. However, we also find that for commonly used meta-GGAs the discontinuity is largely underestimated and in some cases even negative.

  18. JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.

    PubMed

    Lu, Wenmiao; Lu, Yi

    2011-07-01

    Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.

  19. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
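The core idea above, building a cheap surrogate surface whose curvature is iteratively updated from previously computed gradients and jumping to its stationary point, can be illustrated in one dimension, where the Hessian update collapses to a secant estimate. The test function and starting points below are hypothetical, and this sketch is far simpler than the interpolated surfaces of the paper:

```python
def minimize_on_surrogate(grad, x0, x1, tol=1e-10, max_iter=100):
    """1-D sketch: the 'Hessian' h of a local quadratic surrogate is
    iteratively updated from the two latest gradients (a secant update),
    and each step jumps to the minimizer of that surrogate."""
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        h = (g1 - g0) / (x1 - x0)   # curvature updated from stored gradients
        x0, g0 = x1, g1
        x1 = x1 - g1 / h            # exact minimizer of the quadratic model
        g1 = grad(x1)
        if abs(g1) < tol:           # converged: gradient (nearly) zero
            break
    return x1

# hypothetical test function f(x) = x**4/4 + x**2 - 3*x, minimum at x = 1
xmin = minimize_on_surrogate(lambda x: x ** 3 + 2 * x - 3, 0.0, 0.5)
print(xmin)
```

Each surrogate minimization costs essentially nothing next to an electronic-structure gradient evaluation, which is the efficiency argument the abstract makes.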

  20. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

A document discusses a multi-objective, genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is attainable within the Q-law control parameter space.
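The non-dominated sorting fitness described above can be sketched in a few lines. The objective pairs below are hypothetical (flight time, propellant mass) values, not actual Q-law results:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in
    at least one (both flight time and propellant mass are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def domination_counts(points):
    """For each solution, count how many others dominate it; a lower count
    means better fitness, and a count of 0 marks the current Pareto front."""
    return [sum(dominates(q, p) for q in points) for p in points]

# hypothetical (flight time, propellant mass) pairs
population = [(10, 5), (12, 4), (11, 6), (9, 7), (13, 8)]
print(domination_counts(population))
```

Here three of the five solutions are mutually non-dominated and would be favored in reproduction, which is how the population drifts toward the Pareto front.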

  1. Theoretical and experimental investigations of optical, structural and electronic properties of the lower-dimensional hybrid [NH3-(CH2)10-NH3]ZnCl4

    NASA Astrophysics Data System (ADS)

    El Mrabet, R.; Kassou, S.; Tahiri, O.; Belaaraj, A.; Guionneau, P.

    2016-10-01

In the current study, a combination of theoretical and experimental studies has been carried out for the hybrid perovskite [NH3-(CH2)10-NH3]ZnCl4. Density functional theory (DFT) calculations were performed to investigate the structural and electronic properties of the title compound. The local density approximation (LDA) and the semi-local generalized gradient approximation (GGA) were employed, using, respectively, the local exchange-correlation functional of Perdew-Wang 92 (PW92) and the semi-local functional of Perdew-Burke-Ernzerhof (PBE). The optimized cell parameters are in good agreement with the experimental results. Electronic properties have been studied through the calculation of band structures and the density of states (DOS), while structural properties were investigated by geometry optimization of the cell. Fritz-Haber-Institute (FHI) pseudopotentials were employed for all calculations. The optical diffuse reflectance spectrum was measured and used to deduce the refractive index (n), the extinction coefficient (k), the absorption coefficient (α), the real and imaginary parts of the dielectric permittivity (ɛr, ɛi), and the optical band gap energy Eg. The optical band gap energy is in good agreement with that obtained from the DFT calculations and reveals the insulating behavior of the material.

  2. Learning locality preserving graph from data.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Hou, Xinwen; Liu, Cheng-Lin

    2014-11-01

Machine learning based on graph representation, or manifold learning, has attracted great interest in recent years. As the discrete approximation of the data manifold, the graph plays a crucial role in these kinds of learning approaches. In this paper, we propose a novel learning method for graph construction, which is distinct from previous methods in that it solves an optimization problem with the aim of directly preserving the local information of the original data set. We show that the proposed objective has close connections with the popular Laplacian Eigenmap problem, and is hence well justified. The optimization turns out to be a quadratic programming problem with n(n-1)/2 variables (n is the number of data points). Exploiting the sparsity of the graph, we further propose a more efficient cutting plane algorithm to solve the problem, making the method more scalable in practice. In the context of clustering and semi-supervised learning, we demonstrate the advantages of the proposed method through experiments.

  3. GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Savić, D. A.; Kapelan, Z.

    2017-03-01

A new hybrid optimizer, called genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for dealing with the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate to a great extent the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. These operators are selected based on their leaping ability in the objective space from the global and local search perspectives. These strategies steer the optimization and balance the exploration and exploitation aspects simultaneously. A highlighted feature of GALAXY is that it eliminates the majority of parameters, thus being robust and easy to use. The comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness. GALAXY can identify better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between the global and local search. Moreover, its advantages over other MOEAs become more substantial as the complexity of the design problem increases.

  4. Non-rigid Motion Correction in 3D Using Autofocusing with Localized Linear Translations

    PubMed Central

    Cheng, Joseph Y.; Alley, Marcus T.; Cunningham, Charles H.; Vasanawala, Shreyas S.; Pauly, John M.; Lustig, Michael

    2012-01-01

    MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from non-rigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well-approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric – more specifically, the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multi-channel navigator data. The novel navigation strategy is based on the so-called “Butterfly” navigators which are modifications to the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, sufficient number of motion measurements were found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, non-rigid motion was observed. PMID:22307933

  5. Gedanken densities and exact constraints in density functional theory.

    PubMed

    Perdew, John P; Ruzsinszky, Adrienn; Sun, Jianwei; Burke, Kieron

    2014-05-14

Approximations to the exact density functional for the exchange-correlation energy of a many-electron ground state can be constructed by satisfying constraints that are universal, i.e., valid for all electron densities. Gedanken densities are designed for the purpose of this construction, but need not be realistic. The uniform electron gas is an old gedanken density. Here, we propose a spherical two-electron gedanken density in which the dimensionless density gradient can be an arbitrary positive constant wherever the density is non-zero. The Lieb-Oxford lower bound on the exchange energy can be satisfied within a generalized gradient approximation (GGA) by bounding its enhancement factor or simplest GGA exchange-energy density. This enhancement-factor bound is well known to be sufficient, but our gedanken density shows that it is also necessary. The conventional exact exchange-energy density satisfies no such local bound, but energy densities are not unique, and the simplest GGA exchange-energy density is not an approximation to it. We further derive a strongly and optimally tightened bound on the exchange enhancement factor of a two-electron density, which is satisfied by the local density approximation but is violated by all published GGAs and meta-GGAs. Finally, some consequences of the non-uniform density-scaling behavior for the asymptotics of the exchange enhancement factor of a GGA or meta-GGA are given.
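As a concrete illustration of a bounded enhancement factor (using the well-known PBE form, which the abstract itself does not single out), the PBE exchange enhancement factor saturates at 1 + κ = 1.804, the ceiling chosen so that the sufficient local Lieb-Oxford condition discussed above is never violated:

```python
KAPPA, MU = 0.804, 0.21951   # standard PBE exchange parameters

def f_x_pbe(s):
    """PBE exchange enhancement factor as a function of the dimensionless
    density gradient s; it rises monotonically from 1 at s = 0 and saturates
    at 1 + KAPPA = 1.804, enforcing the local Lieb-Oxford condition."""
    return 1.0 + KAPPA - KAPPA / (1.0 + MU * s * s / KAPPA)

print(f_x_pbe(0.0), f_x_pbe(100.0))   # 1.0 at s = 0; approaches 1.804 at large s
```

The gedanken density of the abstract probes exactly this large-s regime at arbitrary constant s, which is what makes the bound necessary as well as sufficient.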

  6. The arbitrary order mixed mimetic finite difference method for the diffusion equation

    DOE PAGES

    Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco

    2016-05-01

Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determine the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and acts on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variable are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with a variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.

  7. Autonomous vehicle motion control, approximate maps, and fuzzy logic

    NASA Technical Reports Server (NTRS)

    Ruspini, Enrique H.

    1993-01-01

Progress on research on the control of actions of autonomous mobile agents using fuzzy logic is presented. The innovations described encompass theoretical and applied developments. At the theoretical level, results are presented from research leading to the combined use of conventional artificial planning techniques with fuzzy logic approaches for the control of local motion and perception actions. Formulations of dynamic programming approaches to optimal control in the context of the analysis of approximate models of the real world are also examined, together with a new approach to goal-conflict resolution that does not require specification of numerical values representing relative goal importance. Applied developments include the introduction of the notion of an approximate map: a fuzzy relational database structure for the representation of vague and imprecise information about the robot's environment is proposed. The central notions of control point and control structure are also discussed.

  8. An efficient linear-scaling CCSD(T) method based on local natural orbitals.

    PubMed

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-07

An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach, a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.

  9. Configurable hardware integrate and fire neurons for sparse approximation.

    PubMed

    Shapero, Samuel; Rozell, Christopher; Hasler, Paul

    2013-09-01

    Sparse approximation is an important optimization problem in signal and image processing applications. A Hopfield-Network-like system of integrate and fire (IF) neurons is proposed as a solution, using the Locally Competitive Algorithm (LCA) to solve an overcomplete L1 sparse approximation problem. A scalable system architecture is described, including IF neurons with a nonlinear firing function, and current-based synapses to provide linear computation. A network of 18 neurons with 12 inputs is implemented on the RASP 2.9v chip, a Field Programmable Analog Array (FPAA) with directly programmable floating gate elements. Said system uses over 1400 floating gates, the largest system programmed on a FPAA to date. The circuit successfully reproduced the outputs of a digital optimization program, converging to within 4.8% RMS, and an objective cost only 1.7% higher on average. The active circuit consumed 559 μA of current at 2.4 V and converges on solutions in 25 μs, with measurement of the converged spike rate taking an additional 1 ms. Extrapolating the scaling trends to a N=1000 node system, the spiking LCA compares favorably with state-of-the-art digital solutions, and analog solutions using a non-spiking approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
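The Locally Competitive Algorithm that the analog hardware implements can be sketched in software as a simple Euler discretization of the network dynamics; the dictionary, input signal, and constants below are illustrative assumptions, and the paper's circuit replaces this loop with continuous-time integrate-and-fire neurons:

```python
def soft(u, lam):
    """Soft-threshold activation: nodes below threshold stay silent."""
    return 0.0 if abs(u) < lam else u - lam * (1 if u > 0 else -1)

def lca(phi, x, lam=0.1, dt=0.1, steps=500):
    """Euler-discretized LCA dynamics: internal states u leak toward the
    feedforward drive while active nodes inhibit their competitors through
    the dictionary Gram matrix, yielding an L1-sparse code for x."""
    m, n = len(x), len(phi[0])
    b = [sum(phi[i][j] * x[i] for i in range(m)) for j in range(n)]
    gram = [[sum(phi[i][j] * phi[i][k] for i in range(m)) for k in range(n)]
            for j in range(n)]
    u = [0.0] * n
    for _ in range(steps):
        a = [soft(uj, lam) for uj in u]
        for j in range(n):
            inhib = sum(gram[j][k] * a[k] for k in range(n) if k != j)
            u[j] += dt * (b[j] - u[j] - inhib)
    return [soft(uj, lam) for uj in u]

# hypothetical 2-D signal equal to the first of three unit dictionary atoms
phi = [[1.0, 0.0, 0.8],
       [0.0, 1.0, 0.6]]
x = [1.0, 0.0]
code = lca(phi, x)
print(code)
```

At convergence only the matching atom remains active (with the usual L1 shrinkage by λ), which is the behavior the measured spike rates of the circuit are read out against.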

  10. Nonlinear identification using a B-spline neural network and chaotic immune approaches

    NASA Astrophysics Data System (ADS)

    dos Santos Coelho, Leandro; Pessôa, Marcelo Wicthoff

    2009-11-01

One of the important applications of the B-spline neural network (BSNN) is to approximate nonlinear functions defined on a compact subset of a Euclidean space in a highly parallel manner. Recently, the BSNN, a type of basis function neural network, has received increasing attention and has been applied in the field of nonlinear identification. BSNNs have the potential to "learn" the process model from input-output data or "learn" fault knowledge from past experience. BSNNs can also be used as function approximators to construct analytical models for residual generation. However, BSNNs are trained by gradient-based methods that may fall into local minima during the learning procedure. When using feed-forward BSNNs, the quality of approximation depends on the placement of the control points (knots) of the spline functions. This paper describes the application of a modified artificial-immune-network-inspired optimization method, opt-aiNet, combined with sequences generated by the Hénon map, to provide a stochastic search that adjusts the control points of a BSNN. The numerical results presented here indicate that artificial immune network optimization methods are useful for building good BSNN models for the nonlinear identification of two case studies: (i) the benchmark Box-Jenkins gas furnace, and (ii) an experimental ball-and-tube system.

  11. A Non-Local, Energy-Optimized Kernel: Recovering Second-Order Exchange and Beyond in Extended Systems

    NASA Astrophysics Data System (ADS)

    Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn

    The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.

  12. Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Adamian, A.

    1988-01-01

    An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.

  13. Double absorbing boundaries for finite-difference time-domain electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu

    We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.

  14. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  15. Electronic structure calculation by nonlinear optimization: Application to metals

    NASA Astrophysics Data System (ADS)

    Benedek, R.; Min, B. I.; Woodward, C.; Garner, J.

    1988-04-01

There is considerable interest in the development of novel algorithms for the calculation of electronic structure (e.g., at the level of the local-density approximation of density-functional theory). In this paper we consider a first-order equation-of-motion method. Two methods of solution are described, one proposed by Williams and Soler, and the other based on a Born-Dyson series expansion. The extension of the approach to metallic systems is outlined and preliminary numerical calculations for Zintl-phase NaTl are presented.

  16. Improvement of halophilic cellulase production from locally isolated fungal strain.

    PubMed

    Gunny, Ahmad Anas Nagoor; Arbain, Dachyar; Jamal, Parveen; Gumba, Rizo Edwin

    2015-07-01

Halophilic cellulases from the newly isolated fungus Aspergillus terreus UniMAP AA-6 were found to be useful for in situ saccharification of ionic-liquid-treated lignocelluloses. Efforts have been made to improve the enzyme production through a statistical optimization approach, namely the Plackett-Burman design and the Face Centered Central Composite Design (FCCCD). The Plackett-Burman experimental design was used to screen the medium components and process conditions. It was found that carboxymethylcellulose (CMC), FeSO4·7H2O, NaCl, MgSO4·7H2O, peptone, agitation speed and inoculum size significantly influence the production of halophilic cellulase. On the other hand, KH2PO4, KOH, yeast extract and temperature had a negative effect on enzyme production. Further optimization through FCCCD revealed that the optimization approach improved halophilic cellulase production from 0.029 U/ml to 0.0625 U/ml, approximately 2.2 times greater than before optimization.

  17. Improvement of halophilic cellulase production from locally isolated fungal strain

    PubMed Central

    Gunny, Ahmad Anas Nagoor; Arbain, Dachyar; Jamal, Parveen; Gumba, Rizo Edwin

    2014-01-01

    Halophilic cellulases from the newly isolated fungus, Aspergillus terreus UniMAP AA-6 were found to be useful for in situ saccharification of ionic liquids treated lignocelluloses. Efforts have been taken to improve the enzyme production through statistical optimization approach namely Plackett–Burman design and the Face Centered Central Composite Design (FCCCD). Plackett–Burman experimental design was used to screen the medium components and process conditions. It was found that carboxymethylcellulose (CMC), FeSO4·7H2O, NaCl, MgSO4·7H2O, peptone, agitation speed and inoculum size significantly influence the production of halophilic cellulase. On the other hand, KH2PO4, KOH, yeast extract and temperature had a negative effect on enzyme production. Further optimization through FCCCD revealed that the optimization approach improved halophilic cellulase production from 0.029 U/ml to 0.0625 U/ml, which was approximately 2.2-times greater than before optimization. PMID:26150755

  18. Nano-SiC region formation in (100) Si-on-insulator substrate: Optimization of hot-C+-ion implantation process to improve photoluminescence intensity

    NASA Astrophysics Data System (ADS)

    Mizuno, Tomohisa; Omata, Yuhsuke; Kanazawa, Rikito; Iguchi, Yusuke; Nakada, Shinji; Aoki, Takashi; Sasaki, Tomokazu

    2018-04-01

    We experimentally studied the optimization of the hot-C+-ion implantation process for forming nano-SiC (silicon carbide) regions in a (100) Si-on-insulator substrate at various hot-C+-ion implantation temperatures and C+ ion doses to improve photoluminescence (PL) intensity for future Si-based photonic devices. We successfully optimized the process by hot-C+-ion implantation at a temperature of about 700 °C and a C+ ion dose of approximately 4 × 1016 cm-2 to realize a high intensity of PL emitted from an approximately 1.5-nm-thick C atom segregation layer near the surface-oxide/Si interface. Moreover, atom probe tomography showed that implanted C atoms cluster in the Si layer and near the oxide/Si interface; thus, the C content locally condenses even in the C atom segregation layer, which leads to SiC formation. Spherical-aberration-corrected transmission electron microscopy also showed that both 4H-SiC and 3C-SiC nanoareas near both the surface-oxide/Si and buried-oxide/Si interfaces partially grow into the oxide layer, and the observed PL photons are mainly emitted from the surface SiC nanoareas.

  19. Documentation for a Structural Optimization Procedure Developed Using the Engineering Analysis Language (EAL)

    NASA Technical Reports Server (NTRS)

    Martin, Carl J., Jr.

    1996-01-01

    This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors written in FORTRAN generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.

  20. Generalized rules for the optimization of elastic network models

    NASA Astrophysics Data System (ADS)

    Lezon, Timothy; Eyal, Eran; Bahar, Ivet

    2009-03-01

    Elastic network models (ENMs) are widely employed for approximating the coarse-grained equilibrium dynamics of proteins using only a few parameters. An area of current focus is improving the predictive accuracy of ENMs by fine-tuning their force constants to fit specific systems. Here we introduce a set of general rules for assigning ENM force constants to residue pairs. Using a novel method, we construct ENMs that optimally reproduce experimental residue covariances from NMR models of 68 proteins. We analyze the optimal interactions in terms of amino acid types, pair distances and local protein structures to identify key factors in determining the effective spring constants. When applied to several unrelated globular proteins, our method shows an improved correlation with experiment over a standard ENM. We discuss the physical interpretation of our findings as well as its implications in the fields of protein folding and dynamics.
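
    The uniform-spring baseline that such optimized force constants improve upon can be sketched directly: a toy Gaussian network model on hypothetical coordinates (not the authors' optimization procedure), where residue covariances come from the pseudoinverse of the Kirchhoff matrix:

```python
import numpy as np

def gnm_kirchhoff(coords, cutoff=7.0, gamma=1.0):
    """Kirchhoff (connectivity) matrix of a Gaussian network model:
    residues closer than `cutoff` interact via identical springs gamma."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = -gamma * (d < cutoff).astype(float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))   # rows sum to zero
    return K

# toy 4-"residue" chain along x, 4 Å spacing
coords = np.array([[0., 0, 0], [4, 0, 0], [8, 0, 0], [12, 0, 0]])
K = gnm_kirchhoff(coords)
cov = np.linalg.pinv(K)   # residue covariance ~ pseudoinverse of Kirchhoff
msf = np.diag(cov)        # mean-square fluctuations; chain ends move most
print(msf)
```

    Replacing the single gamma with residue-pair-specific spring constants, as the paper does, changes only the off-diagonal entries of K.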

  1. Towards an Optimal Gradient-dependent Energy Functional of the PZ-SIC Form

    DOE PAGES

    Jónsson, Elvar Örn; Lehtola, Susi; Jónsson, Hannes

    2015-06-01

    Results of Perdew–Zunger self-interaction corrected (PZ-SIC) density functional theory calculations of the atomization energy of 35 molecules are compared to those of high-level quantum chemistry calculations. While the PBE functional, which is commonly used in calculations of condensed matter, is known to predict on average too high atomization energy (overbinding of the molecules), the application of PZ-SIC gives a large overcorrection and leads to significant underestimation of the atomization energy. The exchange enhancement factor that is optimal for the generalized gradient approximation within the Kohn-Sham (KS) approach may not be optimal for the self-interaction corrected functional. The PBEsol functional, where the exchange enhancement factor was optimized for solids, gives poor results for molecules in KS but turns out to work better than PBE in PZ-SIC calculations. The exchange enhancement is weaker in PBEsol and the functional is closer to the local density approximation. Furthermore, the drop in the exchange enhancement factor for increasing reduced gradient in the PW91 functional gives more accurate results than the plateaued enhancement in the PBE functional. A step towards an optimal exchange enhancement factor for a gradient dependent functional of the PZ-SIC form is taken by constructing an exchange enhancement factor that mimics PBEsol for small values of the reduced gradient, and PW91 for large values. The average atomization energy is then in closer agreement with the high-level quantum chemistry calculations, but the variance is still large, the F2 molecule being a notable outlier.
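
    The PBE-family exchange enhancement factor being compared has a simple closed form; a sketch with the standard published parameter values (this is the generic F_x(s), not the new functional constructed in the paper):

```python
def f_x_pbe_family(s, mu, kappa=0.804):
    """GGA exchange enhancement factor of the PBE form:
    F_x(s) = 1 + kappa - kappa / (1 + mu*s^2/kappa),
    where s is the reduced density gradient; F_x -> 1 recovers LDA exchange
    and F_x is bounded above by 1 + kappa."""
    return 1.0 + kappa - kappa / (1.0 + mu * s * s / kappa)

MU_PBE = 0.21951          # original PBE gradient coefficient
MU_PBESOL = 10.0 / 81.0   # PBEsol: weaker enhancement, closer to LDA

for s in (0.0, 1.0, 2.0):
    print(s, f_x_pbe_family(s, MU_PBE), f_x_pbe_family(s, MU_PBESOL))
```

    The smaller mu of PBEsol makes F_x rise more slowly with s, which is the "weaker exchange enhancement" the abstract refers to.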

  2. Semilocal density functional obeying a strongly tightened bound for exchange

    PubMed Central

    Sun, Jianwei; Perdew, John P.; Ruzsinszky, Adrienn

    2015-01-01

    Because of its useful accuracy and efficiency, density functional theory (DFT) is one of the most widely used electronic structure theories in physics, materials science, and chemistry. Only the exchange-correlation energy is unknown, and needs to be approximated in practice. Exact constraints provide useful information about this functional. The local spin-density approximation (LSDA) was the first constraint-based density functional. The Lieb–Oxford lower bound on the exchange-correlation energy for any density is another constraint that plays an important role in the development of generalized gradient approximations (GGAs) and meta-GGAs. Recently, a strongly and optimally tightened lower bound on the exchange energy was proved for one- and two-electron densities, and conjectured for all densities. In this article, we present a realistic “meta-GGA made very simple” (MGGA-MVS) for exchange that respects this optimal bound, which no previous beyond-LSDA approximation satisfies. This constraint might have been expected to worsen predicted thermochemical properties, but in fact they are improved over those of the Perdew–Burke–Ernzerhof GGA, which has nearly the same correlation part. MVS exchange is however radically different from that of other GGAs and meta-GGAs. Its exchange enhancement factor has a very strong dependence upon the orbital kinetic energy density, which permits accurate energies even with the drastically tightened bound. When this nonempirical MVS meta-GGA is hybridized with 25% of exact exchange, the resulting global hybrid gives excellent predictions for atomization energies, reaction barriers, and weak interactions of molecules. PMID:25561554

  3. Semilocal density functional obeying a strongly tightened bound for exchange.

    PubMed

    Sun, Jianwei; Perdew, John P; Ruzsinszky, Adrienn

    2015-01-20

    Because of its useful accuracy and efficiency, density functional theory (DFT) is one of the most widely used electronic structure theories in physics, materials science, and chemistry. Only the exchange-correlation energy is unknown, and needs to be approximated in practice. Exact constraints provide useful information about this functional. The local spin-density approximation (LSDA) was the first constraint-based density functional. The Lieb-Oxford lower bound on the exchange-correlation energy for any density is another constraint that plays an important role in the development of generalized gradient approximations (GGAs) and meta-GGAs. Recently, a strongly and optimally tightened lower bound on the exchange energy was proved for one- and two-electron densities, and conjectured for all densities. In this article, we present a realistic "meta-GGA made very simple" (MGGA-MVS) for exchange that respects this optimal bound, which no previous beyond-LSDA approximation satisfies. This constraint might have been expected to worsen predicted thermochemical properties, but in fact they are improved over those of the Perdew-Burke-Ernzerhof GGA, which has nearly the same correlation part. MVS exchange is however radically different from that of other GGAs and meta-GGAs. Its exchange enhancement factor has a very strong dependence upon the orbital kinetic energy density, which permits accurate energies even with the drastically tightened bound. When this nonempirical MVS meta-GGA is hybridized with 25% of exact exchange, the resulting global hybrid gives excellent predictions for atomization energies, reaction barriers, and weak interactions of molecules.

  4. Using Chou's pseudo amino acid composition based on approximate entropy and an ensemble of AdaBoost classifiers to predict protein subnuclear location.

    PubMed

    Jiang, Xiaoying; Wei, Rong; Zhao, Yanjun; Zhang, Tongliang

    2008-05-01

    The knowledge of subnuclear localization in eukaryotic cells is essential for understanding the life function of the nucleus. Developing prediction methods and tools for protein subnuclear localization has become an important research field in protein science, owing to the special characteristics of the cell nucleus. In this study, a novel approach has been proposed to predict protein subnuclear localization. A protein sample is represented by a Pseudo Amino Acid (PseAA) composition based on the approximate entropy (ApEn) concept, which reflects the complexity of a time series. A novel ensemble classifier is designed incorporating three AdaBoost classifiers. The base classifiers of the three AdaBoost learners are decision stumps, a fuzzy K-nearest-neighbors classifier, and radial basis-support vector machines, respectively. Different PseAA compositions are used as input data for the different AdaBoost classifiers in the ensemble. A genetic algorithm is used to optimize the dimension and weight factor of the PseAA composition. Two datasets often used in published works are used to validate the performance of the proposed approach. The results of the jackknife cross-validation test are higher and more balanced than those of other methods on the same datasets. These promising results indicate that the proposed approach is effective and practical, and it might become a useful tool in protein subnuclear localization. The software in Matlab and supplementary materials are available freely by contacting the corresponding author.

  5. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
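
    For a simple linear model, the FIM and the D-criterion comparison of sampling schedules reduce to a few lines. A sketch assuming unit residual variance (a hypothetical toy model, not the population PK examples of the study):

```python
import numpy as np

def fim_linear(times):
    """Fisher information matrix for y = a + b*t + noise with unit noise
    variance: FIM = X^T X, where X = [1, t] is the design matrix."""
    X = np.column_stack([np.ones(len(times)), times])
    return X.T @ X

clustered = fim_linear([0.0, 0.1, 0.2, 0.3])  # samples bunched together
spread = fim_linear([0.0, 1.0, 2.0, 3.0])     # samples spread out

# D-criterion: larger determinant = more informative design
print(np.linalg.det(clustered), np.linalg.det(spread))
```

    The FO/FOCE and full/block-diagonal choices studied in the paper change how this matrix is approximated for nonlinear mixed-effects models; the D-criterion comparison itself works the same way.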

  6. Pair 2-electron reduced density matrix theory using localized orbitals

    NASA Astrophysics Data System (ADS)

    Head-Marsden, Kade; Mazziotti, David A.

    2017-08-01

    Full configuration interaction (FCI) restricted to a pairing space yields size-extensive correlation energies but its cost scales exponentially with molecular size. Restricting the variational two-electron reduced-density-matrix (2-RDM) method to represent the same pairing space yields an accurate lower bound to the pair FCI energy at a mean-field-like computational scaling of O (r3) where r is the number of orbitals. In this paper, we show that localized molecular orbitals can be employed to generate an efficient, approximately size-extensive pair 2-RDM method. The use of localized orbitals eliminates the substantial cost of optimizing iteratively the orbitals defining the pairing space without compromising accuracy. In contrast to the localized orbitals, the use of canonical Hartree-Fock molecular orbitals is shown to be both inaccurate and non-size-extensive. The pair 2-RDM has the flexibility to describe the spectra of one-electron RDM occupation numbers from all quantum states that are invariant to time-reversal symmetry. Applications are made to hydrogen chains and their dissociation, n-acene from naphthalene through octacene, and cadmium telluride 2-, 3-, and 4-unit polymers. For the hydrogen chains, the pair 2-RDM method recovers the majority of the energy obtained from similar calculations that iteratively optimize the orbitals. The localized-orbital pair 2-RDM method with its mean-field-like computational scaling and its ability to describe multi-reference correlation has important applications to a range of strongly correlated phenomena in chemistry and physics.

  7. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
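
    The flavor of such error bounds can be seen on a toy MDP: a uniform per-iteration approximation error eps leaves the value iterates within eps/(1-gamma) of the exact value function. A hypothetical two-state example (not the neural-network implementation of the paper):

```python
def value_iteration(P, R, gamma, iters, err=0.0):
    """Value iteration for a finite MDP; `err` injects a uniform
    approximation error into each update, mimicking inexact
    function approximation in adaptive dynamic programming."""
    n = len(R)
    V = [0.0] * n
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in range(len(R[s]))) + err
             for s in range(n)]
    return V

# two states, two actions; P[s][a] lists (next_state, probability)
P = [[[(0, 1.0)], [(1, 1.0)]],
     [[(0, 1.0)], [(1, 1.0)]]]
R = [[0.0, 1.0], [0.0, 2.0]]
gamma = 0.9

V_exact = value_iteration(P, R, gamma, 200)
V_approx = value_iteration(P, R, gamma, 200, err=0.05)
gap = max(abs(a - b) for a, b in zip(V_exact, V_approx))
print(gap, 0.05 / (1 - gamma))   # gap stays within eps/(1-gamma)
```

    The paper's contribution is establishing bounds of this kind for the undiscounted case, where the contraction argument used above is unavailable.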

  8. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaption Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. Preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems.

    References:
    Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102. Springer.
    Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948. Berlin: Springer.
    Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
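
    The hybrid strategy builds on a standard evolution-strategy skeleton. A minimal, hypothetical sketch (a plain (1+lambda) ES with crude step-size control on a toy objective, not the surrogate-assisted CMA-ES of the abstract):

```python
import random

def evolution_strategy(f, x0, sigma=0.5, lam=10, generations=60, seed=1):
    """Toy (1+lambda) evolution strategy: sample lam Gaussian mutations,
    keep the best offspring only if it improves, and adapt the step
    size up on success and down on failure."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(generations):
        offspring = [[xi + sigma * rng.gauss(0, 1) for xi in x]
                     for _ in range(lam)]
        best = min(offspring, key=f)
        if f(best) < f(x):
            x = best
            sigma *= 1.2   # success: widen the search
        else:
            sigma *= 0.8   # failure: contract around the incumbent
    return x

sphere = lambda v: sum(t * t for t in v)   # stand-in for a model-run cost
xopt = evolution_strategy(sphere, [3.0, -2.0])
print(sphere(xopt))
```

    Each call to `f` stands in for an expensive model run; the surrogate and gradient-individual enhancements described above aim precisely at reducing the number of such calls.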

  9. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
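
    The probabilistic constraints in RBDO are failure probabilities. As a baseline, a crude Monte Carlo estimator (the brute-force approach that subset simulation accelerates for rare events; the limit-state function here is a hypothetical example):

```python
import random

def failure_probability(g, sampler, n=200_000, seed=7):
    """Crude Monte Carlo estimate of P[g(X) <= 0]. Subset simulation
    targets the same quantity far more efficiently when it is small."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if g(sampler(rng)) <= 0)
    return fails / n

# limit state g(x) = 4 - (x1 + x2) with independent standard normals;
# failure when x1 + x2 >= 4, i.e. P = Phi(-4/sqrt(2)) ~ 2.3e-3
g = lambda x: 4.0 - (x[0] + x[1])
std_normal_pair = lambda rng: (rng.gauss(0, 1), rng.gauss(0, 1))
p_f = failure_probability(g, std_normal_pair)
print(p_f)
```

    For the much smaller failure probabilities typical of structural reliability, the sample size needed by this estimator explodes, which motivates GSS-type methods.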

  10. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.

  11. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  12. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches stem from the application of the principle of minimizing the mean squared distance, which relies on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).

  13. Probing the localization of magnetic dichroism by atomic-size astigmatic and vortex electron beams.

    PubMed

    Negi, Devendra Singh; Idrobo, Juan Carlos; Rusz, Ján

    2018-03-05

    We report localization of a magnetic dichroic signal on atomic columns in electron magnetic circular dichroism (EMCD), probed by a beam distorted by four-fold astigmatism and by an electron vortex beam. With an astigmatic probe, the magnetic signal-to-noise ratio can be enhanced by blocking the intensity from the central part of the probe. However, the simulations show that for atomic-resolution magnetic measurements, a vortex beam is a more effective probe, with a much higher magnetic signal-to-noise ratio. For all considered beam shapes, the optimal SNR constrains signal detection to low collection angles of approximately 6-8 mrad. Irrespective of the material thickness, the magnetic signal remains strongly localized within the probed atomic column with the vortex beam, whereas for astigmatic probes the magnetic signal originates mostly from the nearest-neighbor atomic columns. Owing to this excellent signal localization when probing individual atomic columns, vortex beams are predicted to be a strong candidate for studying crystal-site-specific magnetic properties, magnetic properties at interfaces, or magnetism arising from individual atomic impurities.

  14. Multi-level adaptive finite element methods. 1: Variational problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1979-01-01

    A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including the local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.

  15. Two Point Exponential Approximation Method for structural optimization of problems with frequency constraints

    NASA Technical Reports Server (NTRS)

    Fadel, G. M.

    1991-01-01

    The two point exponential approximation method was introduced by Fadel et al. (Fadel, 1990) and tested on structural optimization problems with stress and displacement constraints. The results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
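
    A one-variable sketch of the two point exponential approximation, assuming the exponent-matching rule from Fadel et al. (1990); the clipping range is a common practical safeguard, not a detail taken from this paper:

```python
import math

def tpea_1d(f2, df1, df2, x1, x2):
    """Two-point exponential approximation in one variable: choose the
    exponent p so the approximation built at x2 also matches the
    gradient observed at the previous design x1, then approximate
    f(x) ~ f2 + (x2**(1-p)/p) * (x**p - x2**p) * df2."""
    p = 1.0 + math.log(df1 / df2) / math.log(x1 / x2)
    p = max(min(p, 1.0), -1.0)   # clip to [-1, 1], a common safeguard
    def approx(x):
        return f2 + (x2 ** (1 - p) / p) * (x ** p - x2 ** p) * df2
    return approx, p

# reciprocal test function f(x) = 1/x, typical of stress constraints
f = lambda x: 1.0 / x
df = lambda x: -1.0 / x ** 2
approx, p = tpea_1d(f(2.0), df(1.0), df(2.0), 1.0, 2.0)
print(p, approx(4.0), f(4.0))
```

    For this reciprocal function the fitted exponent is p = -1, and the approximation reproduces f exactly; p = 1 recovers the ordinary linear Taylor expansion.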

  16. Rational positive real approximations for LQG optimal compensators arising in active stabilization of flexible structures

    NASA Technical Reports Server (NTRS)

    Desantis, A.

    1994-01-01

    In this paper the approximation problem for a class of optimal compensators for flexible structures is considered. The particular case of a simply supported truss with an offset antenna is dealt with. The nonrational positive real optimal compensator transfer function is determined, and it is proposed that an approximation scheme based on a continued fraction expansion method be used. Comparison with the more popular modal expansion technique is performed in terms of stability margin and parameters sensitivity of the relative approximated closed loop transfer functions.

  17. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets; thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
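
    The kernel target alignment criterion being optimized can be sketched directly: Cristianini-style empirical alignment between a Gaussian kernel matrix and the ideal target kernel yy^T, on a hypothetical two-class toy set (the paper's d.c. decomposition and outer approximation are not reproduced here):

```python
import math

def kernel_target_alignment(X, y, gamma):
    """Empirical alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F) between a
    Gaussian kernel matrix K and the target kernel yy^T; for labels in
    {-1, +1}, ||yy^T||_F = n."""
    n = len(X)
    def k(a, b):
        return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))
    num = frob = 0.0
    for i in range(n):
        for j in range(n):
            kij = k(X[i], X[j])
            num += kij * y[i] * y[j]
            frob += kij * kij
    return num / (math.sqrt(frob) * n)

# two well-separated classes: alignment favors a moderate bandwidth
X = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
y = [1, 1, -1, -1]
print(kernel_target_alignment(X, y, 1.0),
      kernel_target_alignment(X, y, 1000.0))   # overly peaked kernel
```

    Maximizing this quantity over gamma is the one-dimensional analogue of the kernel learning problem the paper solves globally.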

  18. A nanoporous gold membrane for sensing applications

    PubMed Central

    Oo, Swe Zin; Silva, Gloria; Carpignano, Francesca; Noual, Adnane; Pechstedt, Katrin; Mateos, Luis; Grant-Jacob, James A.; Brocklesby, Bill; Horak, Peter; Charlton, Martin; Boden, Stuart A.; Melvin, Tracy

    2016-01-01

    Design and fabrication of three-dimensionally structured gold membranes containing hexagonally close-packed microcavities with nanopores in the base are described. Our aim is to create a nanoporous structure with localized enhancement of the fluorescence or Raman scattering at, and in, the nanopore when excited with light of approximately 600 nm, with a view to providing sensitive detection of biomolecules. A range of geometries of the nanopore integrated into hexagonally close-packed assemblies of gold micro-cavities was first evaluated theoretically. The optimal size and shape of the nanopore in a single microcavity were then considered to provide the highest localized plasmon enhancement (of fluorescence or Raman scattering) at the very center of the nanopore for a bioanalyte traversing through. The optimized design was established to be a 1200 nm diameter cavity of 600 nm depth with a 50 nm square nanopore with rounded corners in the base. A gold 3D-structured membrane containing these sized microcavities with the integrated nanopore was successfully fabricated and ‘proof of concept’ Raman scattering experiments are described. PMID:26973809

  19. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
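
    The object being approximated is the LQ-optimal feedback gain. As a much-simplified illustration (a scalar discrete-time Riccati iteration, a hypothetical stand-in, not the hereditary/delay-system setting of the paper):

```python
def lqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati difference equation
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to its fixed point, then
    return the optimal state-feedback gain k for u = -k x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

k = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
print(k, abs(1.0 - 1.0 * k))   # closed-loop pole |a - b*k| < 1: stable
```

    For hereditary systems the state is infinite-dimensional, and the Legendre-tau scheme replaces this scalar recursion with a convergent sequence of finite-dimensional Riccati problems.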

  20. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  1. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1988-01-01

    The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.

  2. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    ...an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample...
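    As a minimal illustration of the SAA idea (not the report's budget-allocation analysis), the sketch below replaces an expectation with a sample average and optimizes the surrogate; the model and sample sizes are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic program: minimize E[(x - Z)^2] with Z ~ N(1, 1).
# The true optimum is x* = E[Z] = 1. SAA replaces the expectation
# with an average over N samples and optimizes that surrogate.

def saa_estimate(n_samples):
    z = rng.normal(loc=1.0, scale=1.0, size=n_samples)
    # The sample-average problem min_x (1/N) sum_i (x - z_i)^2
    # is solved exactly by the sample mean.
    return z.mean()

# The SAA estimator converges to x* = 1 as the budget grows.
for n in (10, 1000, 100000):
    print(n, saa_estimate(n))
```

The convergence rate of this estimator as the budget grows is exactly the kind of question the report studies.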

  3. Optimized coordinates in vibrational coupled cluster calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomsen, Bo; Christiansen, Ove; Yagi, Kiyoshi

    The use of variationally optimized coordinates, which minimize the vibrational self-consistent field (VSCF) ground state energy with respect to orthogonal transformations of the coordinates, has recently been shown to improve the convergence of vibrational configuration interaction (VCI) towards the exact full VCI [K. Yagi, M. Keçeli, and S. Hirata, J. Chem. Phys. 137, 204118 (2012)]. The present paper proposes an incorporation of optimized coordinates into the vibrational coupled cluster (VCC), which has in the past been shown to outperform VCI in approximate calculations where similar restricted state spaces are employed in VCI and VCC. An embarrassingly parallel algorithm for variational optimization of coordinates for VSCF is implemented and the resulting coordinates and potentials are introduced into a VCC program. The performance of VCC in optimized coordinates (denoted oc-VCC) is examined through pilot applications to water, formaldehyde, and a series of water clusters (dimer, trimer, and hexamer) by comparing the calculated vibrational energy levels with those of the conventional VCC in normal coordinates and VCI in optimized coordinates. For water clusters, in particular, oc-VCC is found to gain orders of magnitude improvement in the accuracy, exemplifying that the combination of optimized coordinates localized to each monomer with the size-extensive VCC wave function provides a supreme description of systems consisting of weakly interacting sub-systems.

  4. Multiple-copy state discrimination: Thinking globally, acting locally

    NASA Astrophysics Data System (ADS)

    Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.

    2011-05-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
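    A toy numerical check of the gap between local and collective strategies (a textbook calculation for pure states, not the paper's five-scheme mixed-state comparison): repeating the single-copy Helstrom measurement on each copy and taking a majority vote is a fixed local scheme, while the optimal collective measurement on all N copies sees the reduced effective overlap c^N:

```python
from math import comb, sqrt

def local_majority_error(c, n):
    """Error of n fixed single-copy Helstrom measurements plus a
    majority vote (n odd), for two pure states with overlap c."""
    p = 0.5 * (1.0 - sqrt(1.0 - c * c))   # single-copy Helstrom error
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def collective_error(c, n):
    """Optimal collective (Helstrom) error over n copies: overlap c**n."""
    return 0.5 * (1.0 - sqrt(1.0 - c ** (2 * n)))

c = 0.8     # invented overlap for the example
for n in (1, 5, 11):
    print(n, local_majority_error(c, n), collective_error(c, n))
```

For N = 1 the two coincide; for larger N the collective error decays faster, which is the asymptotic gap between collective and noncollective schemes discussed above.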

  5. Multiple-copy state discrimination: Thinking globally, acting locally

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.

    2011-05-15

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.

  6. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
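    A sketch of the merit-function idea under stated assumptions: here the merit is simply the surrogate prediction minus a bonus for sampling far from existing data, so minimizing it balances improving the solution against improving the approximation. The objective, surrogate form (a quadratic fit), and weights are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_f(x):            # stand-in for a costly simulation
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

# Designs evaluated so far
X = list(rng.uniform(0, 1, 4))
Y = [expensive_f(x) for x in X]

for it in range(10):
    # Cheap algebraic surrogate: quadratic least-squares fit
    coeffs = np.polyfit(X, Y, 2)
    cand = np.linspace(0, 1, 201)
    surr = np.polyval(coeffs, cand)
    # Merit function: surrogate value minus a distance bonus, so the
    # next sample both exploits the surrogate and refines it.
    dist = np.array([min(abs(c - x) for x in X) for c in cand])
    rho = 0.5 * 0.5 ** it          # decreasing emphasis on exploration
    merit = surr - rho * dist
    x_new = cand[int(np.argmin(merit))]
    X.append(x_new)
    Y.append(expensive_f(x_new))

best = X[int(np.argmin(Y))]
print(best)   # best design evaluated so far
```

Each iteration costs one expensive evaluation; everything else is done on the surrogate.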

  7. An efficient algorithm for building locally refined hp-adaptive H-PCFE: Application to uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-12-01

    Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it is only applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently available in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To illustrate its superior performance, the results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required for obtaining results of similar accuracy.

  8. Many-Body Theory of Pyrochlore Iridates and Related Materials

    NASA Astrophysics Data System (ADS)

    Wang, Runzhi

    In this thesis we focus on two problems. First, we propose a numerical method for generating optimized Wannier functions with desired properties. Second, we perform state-of-the-art density functional plus dynamical mean-field calculations on pyrochlore iridates to investigate the physics induced by the cooperation of spin-orbit coupling and electron correlation. We begin with an introduction to maximally localized Wannier functions and related extensions. We then describe current research on spin-orbit coupling and its interplay with correlation effects, followed by a brief introduction to the `hot' iridate materials. We close the introduction by discussing the numerical methods employed in our work, including density functional theory and dynamical mean-field theory combined with an exact-diagonalization impurity solver. We then propose our approach for constructing an optimized set of Wannier functions, which generalizes the classic maximal localization method put forward by Marzari and Vanderbilt. Our work is motivated by the need for an effective description of the local subspace of the Hamiltonian in beyond-density-functional-theory methods. In extensions of density functional theory such as dynamical mean-field theory, one may want a highly accurate description of particular local orbitals, including correct centers and symmetries, while the basis for the remaining degrees of freedom is unimportant. We therefore develop the selectively localized Wannier function approach, which allows for greater localization in a selected subset of Wannier functions while fixing their centers and ensuring the point symmetries. Applications to real materials are presented to demonstrate the power of our approach. 
Next we move to the investigation of pyrochlore iridates, focusing on the metal-insulator transition and material dependence in these compounds. We perform combined density functional plus dynamical mean-field calculations in Lu2Ir2O7, Y2Ir2O7, and Eu2Ir2O7, with spin-orbit coupling included and both single-site and cluster approximations applied. A broad range of Weyl metal is predicted as the intervening phase in the metal-insulator transition. By comparing to experiments, we find that the single-site approximation fails to predict the gap values and the substantial difference between the Y and Eu compounds, demonstrating the inadequacy of this approximation and indicating the key role played by intersite effects. Finally, we provide a more accurate description of the vicinity of the metal-insulator and topological transitions implied by density functional plus cluster dynamical mean-field calculations of pyrochlore iridates. We find definitive evidence of the Weyl semimetal phase, the electronic structure of which can be approximately described as "Weyl rings" with an extremely flat dispersion of one of the Weyl bands. This Weyl semimetal phase is further investigated by a k·p analysis fitted to the numerical results. We find that this unusual structure leads to interesting behavior in the optical conductivity, including a Hall effect in the interband component, and to an enhanced susceptibility.

  9. Convergence and rate analysis of neural networks for sparse approximation.

    PubMed

    Balavoine, Aurèle; Romberg, Justin; Rozell, Christopher J

    2012-09-01

    We present an analysis of the Locally Competitive Algorithm (LCA), which is a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
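    The LCA dynamics analyzed above can be sketched directly: soft-threshold activations, lateral inhibition through the dictionary Gram matrix, and leaky integration of the node states. This is the standard formulation; the dictionary, signal, and step sizes below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Dictionary (columns are unit-norm atoms) and a signal built from
# two atoms plus small noise.
m, n = 20, 50
Phi = rng.normal(size=(m, n))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(n)
a_true[[3, 17]] = [1.0, -0.7]
s = Phi @ a_true + 0.01 * rng.normal(size=m)

lam, tau, dt = 0.1, 1.0, 0.05
b = Phi.T @ s                       # feedforward drive
G = Phi.T @ Phi - np.eye(n)         # lateral inhibition weights

u = np.zeros(n)                     # internal node states
for _ in range(2000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
    u += (dt / tau) * (b - u - G @ a)                  # LCA dynamics

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
print(np.nonzero(a)[0], np.round(a[np.abs(a) > 1e-6], 3))
```

At the fixed point the active coefficients solve the l1-penalized least-squares (sparse approximation) problem, which is the objective whose convergence the paper analyzes.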

  10. Efficient determination of the uncertainty for the optimization of SPECT system design: a subsampled fisher information matrix.

    PubMed

    Fuin, Niccolo; Pedemonte, Stefano; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F

    2014-03-01

    System designs in single photon emission tomography (SPECT) can be evaluated based on the fundamental trade-off between bias and variance that can be achieved in the reconstruction of emission tomograms. This trade-off can be derived analytically using Cramer-Rao-type bounds, which require the calculation and inversion of the Fisher information matrix (FIM). The inverse of the FIM expresses the uncertainty associated with the tomogram, enabling the comparison of system designs. However, computing, storing and inverting the FIM is not practical with 3-D imaging systems. In order to tackle the problem of the computational load in calculating the inverse of the FIM, a method based on the calculation of the local impulse response and the variance, at a single point, from a single row of the FIM, has previously been proposed for system design. However, this approximation (the circulant approximation) does not capture the global interdependence between the variables in shift-variant systems such as SPECT, and cannot account, e.g., for data truncation or missing data. Our new formulation relies on subsampling the FIM. The FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume. Every element of the FIM at the grid points is calculated exactly, accounting for the acquisition geometry and for the object. This new formulation reduces the computational complexity in estimating the uncertainty, but nevertheless accounts for the global interdependence between the variables, enabling the exploration of design spaces hindered by the circulant approximation. The graphics processing unit (GPU) accelerated implementation of the algorithm further reduces the computation times, making the algorithm a good candidate for real-time optimization of adaptive imaging systems. This paper describes the subsampled FIM formulation and implementation details. 
The advantages and limitations of the new approximation are explored, in comparison with the circulant approximation, in the context of design optimization of a parallel-hole collimator SPECT system and of an adaptive imaging system (similar to the commercially available D-SPECT).
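    For a Poisson measurement model y ~ Poisson(A·λ), the FIM has a closed form and the Cramer-Rao variance bound is read off the diagonal of its inverse. The sketch below computes the exact FIM on a deliberately tiny invented problem; the paper's contribution, subsampling the FIM over a voxel grid, is what makes this tractable at 3-D scale:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy emission model: y ~ Poisson(A @ lam). For this model the Fisher
# information matrix is F = A.T @ diag(1 / (A @ lam)) @ A, and the
# Cramer-Rao bound on the variance of an unbiased estimator of lam_j
# is [F^{-1}]_{jj}.
n_det, n_vox = 40, 16
A = rng.uniform(0.0, 1.0, size=(n_det, n_vox))   # invented system matrix
lam = np.full(n_vox, 5.0)                        # true emission rates

ybar = A @ lam                                   # mean detector counts
F = A.T @ (A / ybar[:, None])                    # Fisher information matrix
crb = np.diag(np.linalg.inv(F))                  # per-voxel variance bound
print(crb.min(), crb.max())
```

Already at 16 voxels F is 16 x 16; for a 3-D tomogram it would be voxels x voxels, which is why subsampling (or the circulant approximation) is needed.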

  11. Multidisciplinary Optimization and Damage Tolerance of Stiffened Structures

    NASA Astrophysics Data System (ADS)

    Jrad, Mohamed

    The structural optimization of a cantilever aircraft wing with curvilinear spars, ribs, and stiffeners is described. For the optimization of a complex wing, a common strategy is to divide the optimization procedure into two subsystems: the global wing optimization, which optimizes the geometry of spars, ribs and wing skins; and the local panel optimization, which optimizes the design variables of local panels bordered by spars and ribs. The stiffeners are placed on the local panels to increase the stiffness and buckling resistance. During the local panel optimization, the stress information is taken from the global model as a displacement boundary condition on the panel edges using the so-called "Global-Local Approach". Particle swarm optimization is used in the integrated global/local optimization to optimize the SpaRibs. A parallel computing approach has been developed in the Python programming language to reduce the CPU time. The license cycle-check method and the memory self-adjustment method are two approaches applied in the parallel framework to optimize resource use, reducing license and memory limitations and making the code robust. The integrated global-local optimization approach has been applied to the subsonic NASA Common Research Model (CRM) wing, demonstrating that the methodology scales to medium-fidelity FEM analysis. The structural weight of the wing has been reduced by 42%, and the parallel implementation allowed a reduction in CPU time of 89%. The aforementioned Global-Local Approach is investigated and applied to a composite panel with a crack at its center. Because of the heterogeneity of composite laminates, an accurate analysis of such panels requires substantial computation time and storage. 
A possible alternative to reduce the computational complexity is global-local analysis, which involves an approximate analysis of the whole structure followed by a detailed analysis of a significantly smaller region of interest. Buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions. Results are then compared to those from Abaqus FEA for different shell elements. The cases of composite panels with one, two, and three stiffeners are investigated. The effect of the distance between the stiffeners on the buckling load is also studied. The variation of the buckling load and buckling modes with stiffener height is investigated. It is shown that there is an optimum value of stiffener height beyond which the structural response of the stiffened panel is not improved and the buckling load does not increase. Furthermore, there exist critical values of stiffener height at which the buckling mode of the structure changes. Next, buckling analysis of a composite panel with two straight stiffeners and a crack at the center is performed. Finally, buckling analysis of a composite panel with curvilinear stiffeners and a crack at the center is also conducted. Results show that panels with a larger crack have a reduced buckling load and that the buckling load decreases slightly when using higher-order 2D shell FEM elements. A damage tolerance framework, EBF3PanelOpt, has been developed to design and analyze curvilinearly stiffened panels. The framework is written in the scripting language Python and interacts with the commercial software MSC.Patran (for geometry and mesh creation), MSC.Nastran (for finite element analysis), and MSC.Marc (for damage tolerance analysis). The crack location is set to the location of the maximum value of the major principal stress, while its orientation is set normal to the major principal axis direction. 
The effective stress intensity factor is calculated using the Virtual Crack Closure Technique and compared to the fracture toughness of the material in order to decide whether the crack will grow. The ratio of these two quantities is used as a constraint, along with the buckling factor, the Kreisselmeier-Steinhauser criterion, and the crippling factor. The EBF3PanelOpt framework is integrated within a two-step Particle Swarm Optimization in order to minimize the weight of the panel while satisfying the aforementioned constraints, using all the shape and thickness parameters as design variables. The result of the PSO is then used as an initial guess for gradient-based optimization, employing VisualDOC with only the thickness parameters as design variables. A stiffened panel with two curvilinear stiffeners is optimized for two load cases. In both cases, a significant reduction in the panel's weight is achieved.

  12. A rotor optimization using regression analysis

    NASA Technical Reports Server (NTRS)

    Giansante, N.

    1984-01-01

    The design and development of helicopter rotors is subject to many design variables and interactions that affect rotor operation. Until recently, selection of rotor design variables to achieve specified rotor operational qualities has been a costly, time-consuming, repetitive task. For the past several years, Kaman Aerospace Corporation has successfully applied multiple linear regression analysis, coupled with optimization and sensitivity procedures, in the analytical design of rotor systems. It is concluded that approximating equations can be developed rapidly for a multiplicity of objective and constraint functions and optimizations can be performed in a rapid and cost-effective manner; the number and/or range of design variables can be increased by expanding the database and developing approximating functions to reflect the expanded design space; the order of the approximating equations can be expanded easily to improve correlation between analyzer results and the approximating equations; gradients of the approximating equations can be calculated easily, and these gradients are smooth functions, reducing the risk of numerical problems in the optimization; the use of approximating functions allows the problem to be started easily and rapidly from various initial designs to enhance the probability of finding a global optimum; and the approximating equations are independent of the analysis or optimization codes used.
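    The regression-surrogate workflow reads, in sketch form, as: sample the design space, fit an approximating equation by least squares, then optimize on the cheap fitted surface using its analytic (smooth) gradient. The "analyzer" below is an invented quadratic stand-in, so the fit recovers it exactly:

```python
import numpy as np

rng = np.random.default_rng(4)

# "Analyzer" stand-in: an expensive rotor-analysis code would go here.
def analyzer(x1, x2):
    return 2.0 + 0.5 * x1 - 1.2 * x2 + 0.3 * x1 * x2 + 0.8 * x2 ** 2

# Sample the design space and fit a quadratic approximating equation
# by multiple linear regression (least squares on polynomial terms).
X = rng.uniform(-1, 1, size=(30, 2))
y = np.array([analyzer(a, b) for a, b in X])
basis = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                         X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# The fitted surface's gradient is smooth and cheap to evaluate:
def grad(x1, x2):
    c0, c1, c2, c12, c11, c22 = coef
    return np.array([c1 + c12 * x2 + 2 * c11 * x1,
                     c2 + c12 * x1 + 2 * c22 * x2])

print(np.round(coef, 3))   # recovers the analyzer's true coefficients
```

Because the fit is independent of the analyzer, the same surrogate machinery works with any analysis code, which is the point made in the abstract.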

  13. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension of the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in the optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network with four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid-point discretization of the parameter space.
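    For readers unfamiliar with pattern search, here is a minimal compass search on continuous variables only: poll points along each coordinate direction and shrink the mesh when no poll improves. The mixed-variable (categorical) extension and surrogate acceleration used in the paper are not reproduced; the test function is invented:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal compass (pattern) search: poll +/- step along each
    coordinate; halve the step when no poll point improves."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5        # refine the mesh
        it += 1
    return x, fx

x, fx = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                       [0.0, 0.0])
print(x, fx)   # converges to the minimizer [1.0, -2.0]
```

No gradients are needed, which is what makes the method attractive when the objective (here, training and evaluating an ANN) is a black box.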

  14. Use of spent mushroom substrate for production of Bacillus thuringiensis by solid-state fermentation.

    PubMed

    Wu, Songqing; Lan, Yanjiao; Huang, Dongmei; Peng, Yan; Huang, Zhipeng; Xu, Lei; Gelbic, Ivan; Carballar-Lejarazu, Rebeca; Guan, Xiong; Zhang, Lingling; Zou, Shuangquan

    2014-02-01

    The aim of this study was to explore a cost-effective method for the mass production of Bacillus thuringiensis (Bt) by solid-state fermentation. As a locally available agroindustrial byproduct, spent mushroom substrate (SMS) was used as raw material for Bt cultivation, and four combinations of SMS-based media were designed. Fermentation conditions were optimized on the best medium and the optimal conditions were determined as follows: temperature 32 degrees C, initial pH value 6, moisture content 50%, the ratio of sieved material to initial material 1:3, and inoculum volume 0.5 ml. Large scale production of B. thuringiensis subsp. israelensis (Bti) LLP29 was conducted on the optimal medium at optimal conditions. High toxicity (1,487 international toxic units/milligram) and long larvicidal persistence of the product were observed in the study, which illustrated that SMS-based solid-state fermentation medium was efficient and economical for large scale industrial production of Bt-based biopesticides. The cost of production of 1 kg of Bt was approximately US$0.075.

  15. Effective Clipart Image Vectorization through Direct Optimization of Bezigons.

    PubMed

    Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian

    2016-02-01

    Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more plausible, as if traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.
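    For background, a bezigon segment is just a Bézier curve, evaluated by De Casteljau's repeated linear interpolation; the paper's differentiable energy and prior terms are built on top of such evaluations. This sketch shows only the evaluation, with invented control points:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated
    linear interpolation of the control polygon."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# One cubic segment of a bezigon: the endpoints are interpolated,
# the two interior control points shape the curve (local control).
ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(de_casteljau(ctrl, 0.0))   # start point (0.0, 0.0)
print(de_casteljau(ctrl, 0.5))   # curve midpoint (2.0, 1.5)
print(de_casteljau(ctrl, 1.0))   # end point (4.0, 0.0)
```

The local control property mentioned in the abstract is visible here: moving one interior control point only affects this segment, which is what makes overlapped piecewise optimization viable.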

  16. Robust hashing with local models for approximate similarity search.

    PubMed

    Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang

    2014-07-01

    Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods use randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code using the hash functions and then explores the buckets that have hash codes similar to the query hash code. Extensive experimental results on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
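    For context, here is the classic randomized baseline the paper improves on, random-hyperplane LSH: sign patterns against random hyperplanes give binary codes whose Hamming distance tracks the angle between vectors. RHLM itself (learned local models, l2,1 minimization) is not reproduced; the dimensions and data are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

dim, n_bits = 32, 16
hyperplanes = rng.normal(size=(n_bits, dim))   # random, data-independent

def hash_code(x):
    """Binary code: which side of each random hyperplane x falls on."""
    return tuple((hyperplanes @ x > 0).astype(int))

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

x = rng.normal(size=dim)
near = x + 0.05 * rng.normal(size=dim)     # near-duplicate of x
far = rng.normal(size=dim)                 # unrelated point
print(hamming(hash_code(x), hash_code(near)),
      hamming(hash_code(x), hash_code(far)))
```

A query is answered by probing the buckets whose codes are within a small Hamming radius of the query code; the paper's criticism is precisely that these hyperplanes ignore the data's local structure.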

  17. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model of the overall parameters and blade-tip clearance, and then a set of samples of design parameters and objective response mean and/or standard deviation is generated by using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples, and this approximation model is used in the robust optimization process. The results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
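    The robust-optimization step can be illustrated on a toy problem: pick the design minimizing mean plus a multiple of the standard deviation of the response over the uncertain parameters. The response function, noise model, and weight below are invented; in the paper this objective would be evaluated on the hierarchical response surface rather than by direct sampling:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy clearance response: depends on design x and a noisy assembly
# parameter xi; sensitivity to the noise grows with x.
def response(x, xi):
    return (x - 1.0) ** 2 + x * xi

candidates = np.linspace(-2.0, 2.0, 81)
xi = rng.normal(0.0, 0.5, size=2000)       # sampled uncertainty

def robust_objective(x):
    r = response(x, xi)
    return r.mean() + 3.0 * r.std()        # mean + 3 sigma criterion

best = min(candidates, key=robust_objective)
print(best)
```

The robust optimum backs off from the nominal optimum x = 1 toward smaller x, trading nominal performance for reduced sensitivity to the uncertain parameter.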

  18. Approximate dynamic programming for optimal stationary control with control-dependent noise.

    PubMed

    Jiang, Yu; Jiang, Zhong-Ping

    2011-12-01

    This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
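
    As a minimal illustration of policy iteration converging to a Riccati solution, here is a sketch for the scalar, noise-free, discrete-time LQR analogue (the paper itself treats stochastic systems with additive and multiplicative noise via Itô calculus; this simplified variant only shows the evaluate/improve structure):

```python
def policy_iteration(a, b, q, r, k0, iters=25):
    """Policy iteration for the scalar discrete-time LQR x+ = a*x + b*u
    with cost sum(q*x^2 + r*u^2) and feedback u = -k*x.
    The initial gain k0 must be stabilizing: |a - b*k0| < 1."""
    k = k0
    for _ in range(iters):
        ac = a - b * k                       # closed-loop dynamics
        p = (q + r * k * k) / (1 - ac * ac)  # policy evaluation (Lyapunov eq.)
        k = a * b * p / (r + b * b * p)      # policy improvement
    return p, k

def dare_residual(p, a, b, q, r):
    """Residual of the scalar discrete algebraic Riccati equation."""
    return q + a * a * p - (a * b * p) ** 2 / (r + b * b * p) - p
```

    Each evaluation step solves a linear (Lyapunov-type) equation for the cost of the current policy, and the iterates approach the Riccati solution that gives the optimal cost.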

  19. Online optimal obstacle avoidance for rotary-wing autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Keeryun

    This thesis presents an integrated framework for online obstacle avoidance of rotary-wing unmanned aerial vehicles (UAVs), which can provide UAVs an obstacle field navigation capability in a partially or completely unknown obstacle-rich environment. The framework is composed of a LIDAR interface, a local obstacle grid generation, a receding horizon (RH) trajectory optimizer, a global shortest path search algorithm, and a climb rate limit detection logic. The key feature of the framework is the use of an optimization-based trajectory generation in which the obstacle avoidance problem is formulated as a nonlinear trajectory optimization problem with state and input constraints over the finite range of the sensor. This local trajectory optimization is combined with a global path search algorithm which provides a useful initial guess to the nonlinear optimization solver. Optimization is the natural process of finding the best trajectory that is dynamically feasible, safe within the vehicle's flight envelope, and collision-free at the same time. The optimal trajectory is continuously updated in real time by the numerical optimization solver, Nonlinear Trajectory Generation (NTG), which is a direct solver based on the spline approximation of trajectory for dynamically flat systems. In fact, the overall approach of this thesis to finding the optimal trajectory is similar to the model predictive control (MPC) or the receding horizon control (RHC), except that this thesis followed a two-layer design; thus, the optimal solution works as a guidance command to be followed by the controller of the vehicle. The framework is implemented in a real-time simulation environment, the Georgia Tech UAV Simulation Tool (GUST), and integrated in the onboard software of the rotary-wing UAV test-bed at Georgia Tech. Initially, the 2D vertical avoidance capability of real obstacles was tested in flight. 
The flight test evaluations were extended to the benchmark tests for 3D avoidance capability over the virtual obstacles, and finally it was demonstrated on real obstacles located at the McKenna MOUT site in Fort Benning, Georgia. Simulations and flight test evaluations demonstrate the feasibility of the developed framework for UAV applications involving low-altitude flight in an urban area.

  20. A Rigorous Framework for Optimization of Expensive Functions by Surrogates

    NASA Technical Reports Server (NTRS)

    Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which design application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
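
    The surrogate-management idea can be sketched in one dimension: fit a quadratic surrogate to three samples of the expensive objective, evaluate the true function once at the surrogate's minimizer, and keep the best samples. This is a simplified, derivative-free sketch under those assumptions, not the framework's safeguarded algorithm:

```python
import math

def parabola_vertex(x1, y1, x2, y2, x3, y3):
    """Minimizer of the quadratic surrogate through three samples."""
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    if den == 0:
        return x2          # degenerate fit: stay at the middle sample
    return x2 - 0.5 * num / den

def surrogate_minimize(f, xs, iters=30):
    """Repeatedly minimize the surrogate, then spend one true evaluation."""
    pts = sorted((x, f(x)) for x in xs)        # three initial samples
    for _ in range(iters):
        (x1, y1), (x2, y2), (x3, y3) = pts
        xn = parabola_vertex(x1, y1, x2, y2, x3, y3)
        if any(abs(xn - x) < 1e-12 for x, _ in pts):
            break                              # surrogate minimizer converged
        pts.append((xn, f(xn)))
        pts = sorted(sorted(pts, key=lambda t: t[1])[:3])  # keep 3 best, by x
    return min(pts, key=lambda t: t[1])[0]
```

    The expensive function is only ever evaluated at surrogate minimizers, which is the economy the framework formalizes and makes provably convergent.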

  1. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1,...,m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
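
    The step-size condition has a simple scalar analogue. For the maximum-likelihood estimate of a Gaussian mean, the steepest-ascent sweep contracts the error by a factor |1 - step| per iteration, so it converges exactly when the step size lies in (0, 2); a minimal sketch (not the paper's mixture setting):

```python
def relaxed_ascent(xbar, mu0, step, iters=60):
    """Deflected-gradient iteration for the ML estimate of a Gaussian
    mean: mu <- mu + step*(xbar - mu). The error shrinks by |1 - step|
    per sweep, so convergence requires 0 < step < 2."""
    mu = mu0
    for _ in range(iters):
        mu += step * (xbar - mu)
    return mu
```

    Any step in (0, 2) converges to the sample mean, step = 1 converges in one sweep, and steps outside the interval diverge, mirroring the abstract's convergence condition.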

  2. On the validity of the use of a localized approximation for helical beams. I. Formal aspects

    NASA Astrophysics Data System (ADS)

    Gouesbet, Gérard; André Ambrosio, Leonardo

    2018-03-01

    The description of an electromagnetic beam for use in light scattering theories may be carried out by using an expansion over vector spherical wave functions with expansion coefficients expressed in terms of Beam Shape Coefficients (BSCs). A celebrated method to evaluate these BSCs has been the use of localized approximations (with several existing variants). We recently established that the use of any existing localized approximation is of limited validity in the case of Bessel and Mathieu beams. In the present paper, we address a warning against the use of any existing localized approximation in the case of helical beams. More specifically, we demonstrate that a procedure used to validate any existing localized approximation fails in the case of helical beams. Numerical computations in a companion paper will confirm that existing localized approximations are of limited validity in the case of helical beams.

  3. Optimization of PZT ceramic IDT sensors for health monitoring of structures.

    PubMed

    Takpara, Rafatou; Duquennoy, Marc; Ouaftouh, Mohammadi; Courtois, Christian; Jenot, Frédéric; Rguiti, Mohamed

    2017-08-01

    Surface acoustic waves (SAW) are particularly suited to effectively monitoring and characterizing structural surfaces (condition of the surface, coating, thin layer, micro-cracks…) as their energy is localized on the surface, within approximately one wavelength. Conventionally, in non-destructive testing, wedge sensors are used to generate guided waves, but they are especially suited to flat surfaces and are sized for a given type of material (angle of refraction). Additionally, these sensors are quite expensive, so it is difficult to leave them permanently on the structure for health monitoring. In this study we therefore consider another type of ultrasonic sensor able to generate SAW: interdigital transducers, or IDT sensors. This paper focuses on the optimization of IDT sensors for non-destructive structural testing using PZT ceramics. The challenge was to optimize the dimensional parameters of the IDT sensors in order to generate surface waves efficiently. Acoustic tests then confirmed these parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
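
    For a plain single-electrode IDT, the key dimensional parameter is the electrode period, which equals the SAW wavelength at the synchronous frequency, with fingers a quarter wavelength wide. A minimal sketch; the velocity and frequency in the usage below are illustrative values, not numbers from the paper:

```python
def idt_geometry(saw_velocity, center_freq):
    """Dimensional parameters of a simple single-electrode IDT: the
    electrode period equals the SAW wavelength at the synchronous
    frequency, and fingers (and gaps) are a quarter wavelength wide.
    Inputs in m/s and Hz; outputs in metres."""
    wavelength = saw_velocity / center_freq
    return {"wavelength": wavelength,
            "electrode_period": wavelength,
            "finger_width": wavelength / 4}
```

    For example, an assumed SAW velocity of 3900 m/s at 5 MHz gives a 0.78 mm period, which is the kind of dimensional sizing the study optimizes.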

  4. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  5. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  6. Optimal Keno Strategies and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Johnson, Roger W.

    2006-01-01

    For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
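
    The exact (hypergeometric) side of the calculation is easy to reproduce: the casino draws 20 of 80 numbers, so the number of catches among a player's picks is hypergeometric. A minimal sketch; the paytable in the usage is hypothetical, not a real casino's:

```python
from math import comb

def catch_pmf(spots):
    """Exact hypergeometric P(k catches) when `spots` numbers are
    picked by the player and the casino draws 20 of 80."""
    total = comb(80, 20)
    return [comb(spots, k) * comb(80 - spots, 20 - k) / total
            for k in range(spots + 1)]

def expected_return(spots, paytable):
    """Expected payout per unit bet; `paytable` maps catches to payouts."""
    return sum(paytable.get(k, 0.0) * p
               for k, p in enumerate(catch_pmf(spots)))
```

    Comparing such exact expectations across paytables is how the optimal strategy is decided; the normal approximation discussed in the article replaces `catch_pmf` for tail-probability estimates.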

  7. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  8. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Approximating parameters for models from only a few available samples is therefore of significant practical value. For signal transduction, the sampling intervals are usually unevenly distributed and chosen heuristically. In this paper, we investigate an approach that guides the selection of time points so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
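
    The variance-minimization idea can be illustrated on a one-parameter toy model: for y(t) = exp(-k t) with additive noise, the Fisher information of a single sample is proportional to the squared sensitivity (t e^{-kt})^2, which is maximized at t = 1/k. A minimal sketch under those assumptions (not the paper's pathway models):

```python
import math

def fisher_info(t, k, sigma=1.0):
    """Fisher information about k carried by one noisy sample of
    y(t) = exp(-k*t): (dy/dk)^2 / sigma^2, with dy/dk = -t*exp(-k*t)."""
    return (t * math.exp(-k * t)) ** 2 / sigma ** 2

def best_sample_time(k, t_grid):
    """Sampling time maximizing information (minimizing estimator variance)."""
    return max(t_grid, key=lambda t: fisher_info(t, k))
```

    Sampling too early or too late yields almost no information about k; the same variance criterion, applied to multi-parameter pathway models, is what the paper's evolutionary search optimizes.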

  9. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

    Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d‧) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. 
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d‧ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.

  10. Spacecraft attitude control using neuro-fuzzy approximation of the optimal controllers

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Woo; Park, Sang-Young; Park, Chandeok

    2016-01-01

    In this study, a neuro-fuzzy controller (NFC) was developed for spacecraft attitude control to mitigate large computational load of the state-dependent Riccati equation (SDRE) controller. The NFC was developed by training a neuro-fuzzy network to approximate the SDRE controller. The stability of the NFC was numerically verified using a Lyapunov-based method, and the performance of the controller was analyzed in terms of approximation ability, steady-state error, cost, and execution time. The simulations and test results indicate that the developed NFC efficiently approximates the SDRE controller, with asymptotic stability in a bounded region of angular velocity encompassing the operational range of rapid-attitude maneuvers. In addition, it was shown that an approximated optimal feedback controller can be designed successfully through neuro-fuzzy approximation of the optimal open-loop controller.

  11. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.

  12. Locality of correlation in density functional theory.

    PubMed

    Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano

    2016-08-07

    The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → -A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.

  13. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  14. Learning to assign binary weights to binary descriptor

    NASA Astrophysics Data System (ADS)

    Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun

    2016-10-01

    Constructing robust binary local feature descriptors is receiving increasing interest due to their binary nature, which enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit may contribute differently to the distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed using an efficient alternating greedy strategy, which significantly improves the discriminative power while preserving the fast-matching advantage. Extensive experimental results on two challenging datasets (Brown dataset and Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.

  15. Spectral risk measures: the risk quadrangle and optimal approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV); DOE PAGES

    Kouri, Drew P.

    2018-05-24

    We develop a general risk quadrangle that gives rise to a large class of spectral risk measures. The statistic of this new risk quadrangle is the average value-at-risk at a specific confidence level. As such, this risk quadrangle generates a continuum of error measures that can be used for superquantile regression. For risk-averse optimization, we introduce an optimal approximation of spectral risk measures using quadrature. Lastly, we prove the consistency of this approximation and demonstrate our results through numerical examples.
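
    The quadrature idea can be sketched empirically: approximate a spectral risk measure by a convex combination of average values-at-risk at a few confidence levels. This uses a simple tail-average estimator of AVaR, a simplification rather than the paper's construction:

```python
def avar(losses, alpha):
    """Empirical average value-at-risk (CVaR): the mean of the worst
    (1 - alpha) fraction of the losses. A plain tail-average estimator,
    not the full Rockafellar-Uryasev formulation."""
    xs = sorted(losses)
    tail = max(1, round(len(xs) * (1 - alpha)))
    return sum(xs[-tail:]) / tail

def spectral_risk(losses, levels, weights):
    """Quadrature approximation: a convex combination of AVaR values
    at a few confidence levels stands in for the spectral integral."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1) < 1e-12
    return sum(w * avar(losses, a) for a, w in zip(levels, weights))
```

    Refining the quadrature (more levels and weights) tightens the approximation, which is the consistency result the paper proves.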

  17. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. 
PMID:23861790

  18. An approximation method for configuration optimization of trusses

    NASA Technical Reports Server (NTRS)

    Hansen, Scott R.; Vanderplaats, Garret N.

    1988-01-01

    Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
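
    The first-order Taylor idea can be shown on a one-bar toy problem: minimizing the weight of a tension bar subject to a stress limit, with the exact stress F/A replaced by its linearization about the current design at each step. A sketch under those assumptions, not the paper's truss formulation:

```python
def linearized_sizing(force, sigma_max, area0, iters=15):
    """Sequential approximate optimization of one bar in tension.
    At each step the exact stress force/area is replaced by its
    first-order Taylor expansion about the current area, and the area
    is shrunk until the *approximate* stress reaches the allowable
    limit. Iterates converge to the exact optimum force/sigma_max,
    provided 0 < area0 < 2*force/sigma_max."""
    area = area0
    for _ in range(iters):
        # Taylor: sigma(A) ~ F/Ak - (F/Ak**2) * (A - Ak); set = sigma_max
        area = 2 * area - sigma_max * area * area / force
    return area
```

    Each cheap linearized subproblem replaces a full reanalysis, which is exactly the economy the member-force approximation buys in the truss setting.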

  19. Robust optimization for nonlinear time-delay dynamical system of dha regulon with cost sensitivity constraint in batch culture

    NASA Astrophysics Data System (ADS)

    Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong

    2016-09-01

    Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time delays are decision variables and the objective is to minimize this biological robustness measure. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of the system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method. 
Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory via numerical simulations.

  20. Algorithms for optimizing cross-overs in DNA shuffling.

    PubMed

    He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris

    2012-03-21

    DNA shuffling generates combinatorial libraries of chimeric genes by stochastically recombining parent genes. The resulting libraries are subjected to large-scale genetic selection or screening to identify those chimeras with favorable properties (e.g., enhanced stability or enzymatic activity). While DNA shuffling has been applied quite successfully, it is limited by its homology-dependent, stochastic nature. Consequently, it is used only with parents of sufficient overall sequence identity, and provides no control over the resulting chimeric library. This paper presents efficient methods to extend the scope of DNA shuffling to handle significantly more diverse parents and to generate more predictable, optimized libraries. Our CODNS (cross-over optimization for DNA shuffling) approach employs polynomial-time dynamic programming algorithms to select codons for the parental amino acids, allowing for zero or a fixed number of conservative substitutions. We first present efficient algorithms to optimize the local sequence identity or the nearest-neighbor approximation of the change in free energy upon annealing, objectives that were previously optimized by computationally-expensive integer programming methods. We then present efficient algorithms for more powerful objectives that seek to localize and enhance the frequency of recombination by producing "runs" of common nucleotides either overall or according to the sequence diversity of the resulting chimeras. We demonstrate the effectiveness of CODNS in choosing codons and allocating substitutions to promote recombination between parents targeted in earlier studies: two GAR transformylases (41% amino acid sequence identity), two very distantly related DNA polymerases, Pol X and β (15%), and beta-lactamases of varying identity (26-47%). 
Our methods provide the protein engineer with a new approach to DNA shuffling that supports substantially more diverse parents, is more deterministic, and generates more predictable and more diverse chimeric libraries.
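As a toy illustration of the codon-selection idea (not the CODNS dynamic program itself, which handles substitution budgets and run-based recombination objectives), the sketch below scores every codon pair at each aligned amino-acid position and keeps the pair with the highest nucleotide identity. The minimal codon table and the parent sequences are invented for the demo.

```python
# Simplified illustration (not the CODNS algorithm): for each aligned
# amino-acid position, pick one codon per parent so that nucleotide identity
# is maximized. With no substitutions and independent positions, this reduces
# to an exhaustive per-position search.
CODONS = {  # hypothetical minimal codon table for the demo amino acids
    'M': ['ATG'],
    'L': ['TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG'],
    'F': ['TTT', 'TTC'],
    'K': ['AAA', 'AAG'],
    'R': ['CGT', 'CGC', 'CGA', 'CGG', 'AGA', 'AGG'],
}

def nt_identity(c1, c2):
    # number of matching nucleotides between two codons
    return sum(a == b for a, b in zip(c1, c2))

def pick_codons(parent_a, parent_b):
    """Choose codons maximizing summed nucleotide identity, position by position."""
    dna_a, dna_b, score = [], [], 0
    for aa_a, aa_b in zip(parent_a, parent_b):
        best = max((nt_identity(ca, cb), ca, cb)
                   for ca in CODONS[aa_a] for cb in CODONS[aa_b])
        score += best[0]
        dna_a.append(best[1])
        dna_b.append(best[2])
    return ''.join(dna_a), ''.join(dna_b), score

dna_a, dna_b, score = pick_codons('MLK', 'MFR')
print(dna_a, dna_b, score)
```

The full method additionally rewards contiguous runs of identical nucleotides, which couples neighboring positions and is why a polynomial-time dynamic program, rather than this independent per-position search, is needed.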

  1. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

This model-based design of experiments (MBDOE) method determines the magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.

  3. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
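For intuition about Monge-Kantorovich transport with quadratic cost (not the set-oriented, graph-based flow formulation above), a minimal discrete sketch: for equal-mass atoms on the real line, the optimal coupling is simply the monotone (sorted) matching.

```python
import numpy as np

# Minimal illustration of discrete Monge-Kantorovich transport with quadratic
# cost: for equal-mass point masses in 1-D, matching source and target points
# in sorted order is optimal (monotone rearrangement).
def ot_1d_quadratic(source, target):
    """Return the optimal pairing and total quadratic cost for equal-mass atoms."""
    src = np.sort(np.asarray(source, dtype=float))
    tgt = np.sort(np.asarray(target, dtype=float))
    cost = float(np.sum((src - tgt) ** 2))
    return list(zip(src, tgt)), cost

pairs, cost = ot_1d_quadratic([0.0, 2.0, 5.0], [1.0, 3.0, 4.0])
print(pairs, cost)  # 0->1, 2->3, 5->4; cost 1 + 1 + 1 = 3
```

In higher dimensions, or on the graph discretization used above, the same problem becomes a linear program over couplings rather than a sort.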

  4. Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities

    NASA Astrophysics Data System (ADS)

    Sadeghi, Alireza; Sheikholeslami, Fatemeh; Giannakis, Georgios B.

    2018-02-01

Small basestations (SBs) equipped with caching units have the potential to handle the unprecedented demand growth in heterogeneous networks. Through low-rate, backhaul connections with the backbone, SBs can prefetch popular files during off-peak traffic hours, and service them to the edge at peak periods. To intelligently prefetch, each SB must learn what and when to cache, while taking into account SB memory limitations, the massive number of available contents, the unknown popularity profiles, as well as the space-time popularity dynamics of user file requests. In this work, local and global Markov processes model user requests, and a reinforcement learning (RL) framework is put forth for finding the optimal caching policy when the transition probabilities involved are unknown. Joint consideration of global and local popularity demands along with cache-refreshing costs allows for a simple, yet practical asynchronous caching approach. The novel RL-based caching relies on a Q-learning algorithm to implement the optimal policy in an online fashion, thus enabling the cache control unit at the SB to learn, track, and possibly adapt to the underlying dynamics. To endow the algorithm with scalability, a linear function approximation of the proposed Q-learning scheme is introduced, offering faster convergence as well as reduced complexity and memory requirements. Numerical tests corroborate the merits of the proposed approach in various realistic settings.
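A hedged sketch of Q-learning with linear function approximation in the spirit of the scheme above; the toy one-file caching MDP, the polynomial features, and the reward below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy Q-learning with linear function approximation. State: a scalar file
# popularity in [0, 1]; actions: 0 = don't cache, 1 = cache. Caching a popular
# file saves backhaul traffic but costs memory (all numbers are made up).
rng = np.random.default_rng(0)
n_features, n_actions = 4, 2
w = np.zeros((n_actions, n_features))   # one weight vector per action
alpha, gamma, eps = 0.1, 0.9, 0.1

def features(popularity):
    # simple polynomial features of the scalar popularity state
    return np.array([1.0, popularity, popularity**2, popularity**3])

def q(s, a):
    return w[a] @ features(s)

def reward(popularity, action):
    return 2.0 * popularity - 0.5 if action == 1 else 0.0

s = rng.random()
for _ in range(5000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax([q(s, b) for b in (0, 1)]))
    r = reward(s, a)
    s_next = rng.random()               # popularity evolves exogenously here
    td_err = r + gamma * max(q(s_next, 0), q(s_next, 1)) - q(s, a)
    w[a] += alpha * td_err * features(s)  # semi-gradient TD update
    s = s_next

print(q(0.9, 1), q(0.9, 0))  # after training, caching should win for popular files
```

The point of the linear parameterization is exactly the scalability benefit cited above: the learner stores a few weights per action instead of a Q-table over every (popularity, cache-state) pair.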

  5. L^1 -optimality conditions for the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Chen, Zheng

    2016-11-01

In this paper, the L^1 -minimization for the translational motion of a spacecraft in the circular restricted three-body problem (CRTBP) is considered. Necessary conditions are derived by using the Pontryagin Maximum Principle (PMP), revealing the existence of bang-bang and singular controls. Singular extremals are analyzed, recalling the existence of the Fuller phenomenon according to the theories developed in (Marchal in J Optim Theory Appl 11(5):441-486, 1973; Zelikin and Borisov in Theory of Chattering Control with Applications to Astronautics, Robotics, Economics, and Engineering. Birkhäuser, Basel 1994; in J Math Sci 114(3):1227-1344, 2003). The sufficient optimality conditions for the L^1 -minimization problem with fixed endpoints have been developed in (Chen et al. in SIAM J Control Optim 54(3):1245-1265, 2016). In the current paper, we establish second-order conditions for optimal control problems with more general final conditions defined by a smooth submanifold target. In addition, the numerical implementation to check these optimality conditions is given. Finally, approximating the Earth-Moon-Spacecraft system by the CRTBP, an L^1 -minimization trajectory for the translational motion of a spacecraft is computed by combining a shooting method with a continuation method in (Caillau et al. in Celest Mech Dyn Astron 114:137-150, 2012; Caillau and Daoud in SIAM J Control Optim 50(6):3178-3202, 2012). The local optimality of the computed trajectory is asserted thanks to the second-order optimality conditions developed.

  6. Inelastic neutron scattering spectrum of cyclotrimethylenetrinitramine: a comparison with solid-state electronic structure calculations.

    PubMed

    Ciezak, Jennifer A; Trevino, S F

    2006-04-20

    Solid-state geometry optimizations and corresponding normal-mode analysis of the widely used energetic material cyclotrimethylenetrinitramine (RDX) were performed using density functional theory with both the generalized gradient approximation (BLYP and BP functionals) and the local density approximation (PWC and VWN functionals). The structural results were found to be in good agreement with experimental neutron diffraction data and previously reported calculations based on the isolated-molecule approximation. The vibrational inelastic neutron scattering (INS) spectrum of polycrystalline RDX was measured and compared with simulated INS constructed from the solid-state calculations. The vibrational frequencies calculated from the solid-state methods had average deviations of 10 cm(-1) or less, whereas previously published frequencies based on an isolated-molecule approximation had deviations of 65 cm(-1) or less, illustrating the importance of including crystalline forces. On the basis of the calculations and analysis, it was possible to assign the normal modes and symmetries, which agree well with previous assignments. Four possible "doorway modes" were found in the energy range defined by the lattice modes, which were all found to contain fundamental contributions from rotation of the nitro groups.

  7. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.
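A minimal sketch of the under-determined situation described above: a polynomial with more coefficients (6) than training pairs (3). The minimum-norm least-squares solution is one common way to pin down the extra degrees of freedom; the function and sample points are illustrative.

```python
import numpy as np

# Under-determined polynomial approximation: 3 training pairs, 6 coefficients.
# np.linalg.lstsq returns the minimum-norm coefficient vector.
x_train = np.array([0.0, 0.5, 1.0])
y_train = x_train**2                          # function being approximated
V = np.vander(x_train, N=6, increasing=True)  # 3 equations, 6 unknowns
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# the fit interpolates the training pairs exactly...
print(np.allclose(V @ coef, y_train))
# ...but is not unique: infinitely many degree-5 polynomials pass through
# 3 points, which is precisely what makes under-determined fits delicate.
```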

  8. Bayesian Integration of Information in Hippocampal Place Cells

    PubMed Central

    Madl, Tamas; Franklin, Stan; Chen, Ke; Montaldi, Daniela; Trappl, Robert

    2014-01-01

Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that hippocampal place cells might implement such an error correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made based on a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters. PMID:24603429
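The core computation behind Bayes-optimal cue integration is the standard precision-weighted average of Gaussian cues; the sketch below is that textbook rule with invented numbers, not the paper's fitted place-cell model.

```python
# Bayes-optimal fusion of two independent Gaussian cues: the posterior mean
# is a precision-weighted average, and the posterior variance is smaller than
# either cue's variance (the error-correction effect).
def fuse(mu1, var1, mu2, var2):
    """Posterior mean and variance for two independent Gaussian cues."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # precisions
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

# e.g. path integration says 10 cm (noisy), a visual landmark says 12 cm (sharp)
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # fused estimate is pulled toward the more reliable cue
```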

  9. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
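The flavor of the piecewise-constant scheme can be sketched with a scalar delay differential equation: the history segment is discretized into N slots, turning the infinite-dimensional state into a finite vector with a shift-and-update recursion. This is the textbook Euler variant under made-up coefficients, not the paper's semigroup construction.

```python
import numpy as np

# Approximate x'(t) = a*x(t) + b*x(t - tau) by a finite difference equation.
# The state vector holds [x(t), x(t-h), ..., x(t-tau)]; each step computes a
# new head value and shifts the history.
a, b, tau = -1.0, 0.5, 1.0
N = 100                      # history subdivisions
h = tau / N                  # time step
state = np.ones(N + 1)       # constant history x(t) = 1 for t <= 0

for _ in range(int(2.0 / h)):                       # integrate t in [0, 2]
    x_now, x_delayed = state[0], state[-1]
    x_next = x_now + h * (a * x_now + b * x_delayed)
    state = np.concatenate(([x_next], state[:-1]))  # shift the history window

print(state[0])  # x(2) under the discretized dynamics
```

Refining N yields a sequence of ordinary difference equations whose solutions converge to the hereditary system's trajectory, which is the sense in which the approximating optimal control problems converge.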

  10. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  11. Locality of correlation in density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Kieron; Cancio, Antonio; Gould, Tim

The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → −A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.

  12. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Forced Methods

    NASA Technical Reports Server (NTRS)

    2004-01-01

The grant closure report is organized into the following chapters: the first chapter describes the two research areas, design optimization and solid mechanics; ten journal publications are listed in the second chapter; and five highlights are the subject matter of chapter three. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.

  13. Approximate approach for optimization space flights with a low thrust on the basis of sufficient optimality conditions

    NASA Astrophysics Data System (ADS)

    Salmin, Vadim V.

    2017-01-01

Low-thrust flight mechanics is a comparatively new chapter of space flight mechanics, encompassing the full range of trajectory optimization problems, motion control laws, and spacecraft design parameters. Tasks that account for additional factors in the mathematical models of spacecraft motion, as well as additional restrictions on thrust-vector control, are becoming increasingly important. The resulting complication of the mathematical models of controlled motion leads to difficulties in solving the optimization problems. The author proposes methods for finding approximately optimal controls and for evaluating their optimality on the basis of analytical solutions. These methods rest on the principle of extending the class of admissible states and controls and on sufficient conditions for an absolute minimum. The estimation procedures developed make it possible to determine how close a found solution is to the optimum and to indicate ways of improving it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between non-coplanar circular orbits, optimizing the control angle and trajectory of a spacecraft during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth-satellite orbits.

  14. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC - Area under the Receiver Operating Characteristic Curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC out-performs SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
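The ramp surrogate at the heart of RAUC can be sketched directly: each case/control score difference is penalized by a clipped hinge, giving a bounded, outlier-robust stand-in for the 0-1 AUC loss. The optimizer itself (the difference-of-convex algorithm) is omitted, and the scores below are invented.

```python
import numpy as np

# Empirical AUC and its ramp surrogate over all case/control pairs.
def empirical_auc(scores_pos, scores_neg):
    diffs = np.subtract.outer(scores_pos, scores_neg)  # pairwise differences
    return float(np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0))

def ramp_loss(scores_pos, scores_neg, s=1.0):
    # ramp(t) = min(1, max(0, 1 - t/s)) applied to each pairwise difference;
    # unlike the hinge, it saturates at 1, so badly mis-ranked pairs cannot
    # dominate the objective.
    diffs = np.subtract.outer(scores_pos, scores_neg)
    return float(np.mean(np.clip(1.0 - diffs / s, 0.0, 1.0)))

pos = np.array([2.0, 3.0, 0.5])   # scores for diseased subjects
neg = np.array([0.0, 1.0])        # scores for healthy subjects
print(empirical_auc(pos, neg), ramp_loss(pos, neg))
```

Minimizing the ramp loss over a (linear or kernel) combination of markers then approximately maximizes the empirical AUC.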

  15. The dynamics and control of large flexible space structures, 3. Part A: Shape and orientation control of a platform in orbit using point actuators

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Reddy, A. S. S. R.; Krishna, R.; James, P. K.

    1980-01-01

The dynamics, attitude, and shape control of a large thin flexible square platform in orbit are studied. Attitude and shape control are assumed to result from actuators placed perpendicular to the main surface and one edge and their effect on the rigid body and elastic modes is modelled to first order. The equations of motion are linearized about three different nominal orientations: (1) the platform following the local vertical with its major surface perpendicular to the orbital plane; (2) the platform following the local horizontal with its major surface normal to the local vertical; and (3) the platform following the local vertical with its major surface perpendicular to the orbit normal. The stability of the uncontrolled system is investigated analytically. Once controllability is established for a set of actuator locations, control law development is based on decoupling, pole placement, and linear optimal control theory. Frequencies and elastic modal shape functions are obtained using a finite element computer algorithm and two different approximate analytical methods, and the results of the three methods are compared.

  16. Probing the localization of magnetic dichroism by atomic-size astigmatic and vortex electron beams

    DOE PAGES

    Negi, Devendra Singh; Idrobo, Juan Carlos; Rusz, Ján

    2018-03-05

We report localization of a magnetic dichroic signal on atomic columns in electron magnetic circular dichroism (EMCD), probed by a beam distorted by four-fold astigmatism and by an electron vortex beam. With an astigmatic probe, the magnetic signal-to-noise ratio can be enhanced by blocking the intensity from the central part of the probe. However, the simulations show that for atomic-resolution magnetic measurements, a vortex beam is a more effective probe, with a much higher magnetic signal-to-noise ratio. For all considered beam shapes, the optimal SNR constrains signal detection to low collection angles of approximately 6–8 mrad. Irrespective of the material thickness, the magnetic signal remains strongly localized within the probed atomic column with a vortex beam, whereas for astigmatic probes, the magnetic signal originates mostly from the nearest-neighbor atomic columns. Owing to this excellent signal localization when probing individual atomic columns, vortex beams are predicted to be a strong candidate for studying crystal-site-specific magnetic properties, magnetic properties at interfaces, or magnetism arising from individual atomic impurities.

  18. A Closer look on Ineffectiveness in Riau Mainland Expenditure: Local Government Budget Case

    NASA Astrophysics Data System (ADS)

    Yandra, Alexsander; Roserdevi Nasution, Sri; Harsini; Wardi, Jeni

    2018-05-01

This study discusses the ineffectiveness of expenditure by one Indonesian local government, the province of Riau, whose Local Government Budget (APBD) amounted to Rp. 10.7 trillion in 2015. According to data from the Financial Management and Regional Assets Board (BPKAD), realization of Riau's 2015 APBD stood at approximately 37.58% as of October 2015; further data from the Ministry of Home Affairs show that Riau's regional budget realization from January to December 2015, at 59.6%, was the lowest in Indonesia. These percentages reflect weak budget implementation and indicate that the Riau government was less than optimal in spending its 2015 budget. Viewed through a theoretical approach to government spending, the implementation of public policy shows an ineffectiveness of the budget that has implications for regional development, since the regional budget remains only a draft for achieving its targets. Budget management in 2015 by the provincial administration through the Local Government Units (SKPD) shows a lack of synchronization between the Medium Term Development Plan and the SKPD work programs.

  19. Solving bi-level optimization problems in engineering design using kriging models

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Liu, Xiaojie; Du, Gang

    2018-05-01

    Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time reduce the workload of computation remarkably. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
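A minimal kriging (Gaussian-process) interpolator of the kind used to approximate the follower's optimal-solution function: sample the lower-level optimum at a few leader decisions, then predict it elsewhere. The RBF kernel, its length-scale, and the stand-in lower-level problem are illustrative assumptions, not the OSFAKM configuration.

```python
import numpy as np

# Kriging sketch: interpolate y(x) = (lower-level optimal response at leader
# decision x) from a handful of exact lower-level solves.
def rbf(a, b, length=1.0):
    # squared-exponential covariance between two 1-D point sets
    return np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / length**2)

def lower_level_optimum(x):
    # stand-in for "solve the follower's problem at leader decision x"
    return np.sin(x)

x_train = np.linspace(0.0, 3.0, 7)          # sampled leader decisions
y_train = lower_level_optimum(x_train)       # exact lower-level optima
K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))  # jitter for stability
coef = np.linalg.solve(K, y_train)

def predict(x_new):
    """Kriging prediction of the optimal-solution function at new points."""
    return rbf(np.atleast_1d(x_new), x_train) @ coef

x0 = 1.3
print(float(predict(x0)[0]), np.sin(x0))  # kriging prediction vs. true optimum
```

The upper-level optimizer can then query `predict` instead of re-solving the lower-level program at every candidate leader decision, which is where the computational savings come from.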

  20. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  1. LES of a Jet Excited by the Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2011-01-01

The fluid dynamics of a high-speed jet are governed by the instability waves that form in the free-shear boundary layer of the jet. Jet excitation manipulates the growth and saturation of particular instability waves to control the unsteady flow structures that characterize the energy cascade in the jet. The results may include jet noise mitigation or a reduction in the infrared signature of the jet. The Localized Arc Filament Plasma Actuators (LAFPA) have demonstrated the ability to excite high-speed jets in laboratory experiments. Extending and optimizing this excitation technology, however, is a complex process that will require many tests and trials. Computational simulations can play an important role in understanding and optimizing this actuator technology for real-world applications. Previous research has focused on developing a suitable actuator model and coupling it with the appropriate computational fluid dynamics (CFD) methods using two-dimensional spatial flow approximations. This work is now extended to three dimensions (3-D) in space. The actuator model is adapted to a series of discrete actuators and a 3-D LES simulation of an excited jet is run. The results are used to study the fluid dynamics near the actuator and in the jet plume.

  2. Adaptation of the projector-augmented-wave formalism to the treatment of orbital-dependent exchange-correlation functionals

    NASA Astrophysics Data System (ADS)

    Xu, Xiao; Holzwarth, N. A. W.

    2011-10-01

This paper presents the formulation and numerical implementation of a self-consistent treatment of orbital-dependent exchange-correlation functionals within the projector-augmented-wave method of Blöchl [Phys. Rev. B 50, 17953 (1994)] for electronic structure calculations. The methodology is illustrated with binding energy curves for C in the diamond structure and LiF in the rock salt structure, by comparing results from the Hartree-Fock (HF) formalism and the optimized effective potential formalism in the so-called KLI approximation [Krieger, Li, and Iafrate, Phys. Rev. A 45, 101 (1992)] with those of the local density approximation. While the work here uses pure Fock exchange only, the formalism can be extended to treat orbital-dependent functionals more generally.

  3. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
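The first- and second-order moment propagation the abstract describes can be sketched for a scalar toy problem; the function f, its derivative, and the input statistics below are hypothetical placeholders for a CFD output and its sensitivity derivative, and the Monte Carlo run plays the role of the validation step:

```python
import math
import random

def f(x):
    return x ** 2 + 2.0 * x          # hypothetical stand-in for a CFD output

def dfdx(x):
    return 2.0 * x + 2.0             # its analytic first-order sensitivity

mu, sigma = 3.0, 0.1                 # mean and std of the random input variable

# First-order statistical moment approximation:
#   mean ~ f(mu),  std ~ |df/dx(mu)| * sigma
approx_mean = f(mu)
approx_std = abs(dfdx(mu)) * sigma

# Monte Carlo reference used to assess the validity of the approximation
random.seed(0)
samples = [f(random.gauss(mu, sigma)) for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_std = math.sqrt(sum((s - mc_mean) ** 2 for s in samples) / len(samples))
```

For this smooth, mildly nonlinear f the first-order moments land close to the Monte Carlo reference, mirroring the paper's finding that the approximation is valid for robustness about input mean values.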

  4. DQE and system optimization for indirect-detection flat-panel imagers in diagnostic radiology

    NASA Astrophysics Data System (ADS)

    Siewerdsen, Jeffrey H.; Antonuk, Larry E.

    1998-07-01

    The performance of indirect-detection flat-panel imagers incorporating CsI:Tl x-ray converters is examined through calculation of the detective quantum efficiency (DQE) under conditions of chest radiography, fluoroscopy, and mammography. Calculations are based upon a cascaded systems model which has demonstrated excellent agreement with empirical signal, noise-power spectra, and DQE results. For each application, the DQE is calculated as a function of spatial frequency and CsI:Tl thickness. A preliminary investigation into the optimization of flat-panel imaging systems is described, wherein the x-ray converter thickness which provides optimal DQE for a given imaging task is estimated. For each application, a number of example tasks involving detection of an object of variable size and contrast against a noisy background are considered. The method described is fairly general and can be extended to account for a variety of imaging tasks. For the specific examples considered, the preliminary results estimate optimal CsI:Tl thicknesses of approximately 450 micrometer (approximately 200 mg/cm2), approximately 320 micrometer (approximately 140 mg/cm2), and approximately 200 micrometer (approximately 90 mg/cm2) for chest radiography, fluoroscopy, and mammography, respectively. These results are expected to depend upon the imaging task as well as upon the quality of available CsI:Tl, and future improvements in scintillator fabrication could result in increased optimal thickness and DQE.

  5. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods on four test problems in structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
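The one-dimensional two-point form can be sketched as follows: fit an exponent p so the model y1·(x/x1)^p matches the response at two sampled points. The response function here is a hypothetical stand-in; the paper's multi-variable form generalizes this idea:

```python
import math

def response(x):
    return 5.0 / x ** 3            # toy structural response (power-law form)

x1, x2 = 1.0, 2.0                  # two sampled design points
y1, y2 = response(x1), response(x2)

# exponent p chosen so the exponential model matches both samples
p = math.log(y2 / y1) / math.log(x2 / x1)

def approx(x):
    return y1 * (x / x1) ** p      # two-point exponential approximation

# linear fit through the same two points, for comparison
slope = (y2 - y1) / (x2 - x1)

def linear(x):
    return y1 + slope * (x - x1)
```

Here the fit recovers p = -3 exactly because the toy response is itself a power law, which is the motivating case: stress-like responses often behave nearly as x^p in sizing variables, so the exponential beats a linear fit between samples.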

  6. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight window) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally of two to four orders of magnitude, have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  7. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

    Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take the basin geometry into account. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information used and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for 3D gravity inversion and appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In this way, important geological questions can be answered probabilistically in order to perform risk assessment on the decisions that are made.
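A minimal particle swarm optimizer illustrates the global search the authors rely on; the quadratic misfit and all parameter values below are illustrative stand-ins for a real gravity-data misfit over prism depths:

```python
import random

def misfit(z):
    # toy objective standing in for a gravity-data misfit; minimum at (2, 3)
    return (z[0] - 2.0) ** 2 + (z[1] - 3.0) ** 2

random.seed(1)
n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights
pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]                  # personal best positions
gbest = min(pbest, key=misfit)[:]            # global best position

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if misfit(pos[i]) < misfit(pbest[i]):
            pbest[i] = pos[i][:]
            if misfit(pbest[i]) < misfit(gbest):
                gbest = pbest[i][:]
```

In the sampling-while-optimizing spirit of the paper, the swarm's visited positions (not just `gbest`) could be retained to characterize uncertainty around the best model.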

  8. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    PubMed

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
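The value iteration the authors adapt can be shown in its textbook discrete form; the two-state MDP below is invented for illustration, and the paper's critic-action NN machinery replaces the exact table used here:

```python
# Value iteration on a toy two-state, two-action MDP (the discrete analogue of
# solving the HJB/Bellman equation); dynamics and rewards are illustrative.
states, actions, gamma = [0, 1], [0, 1], 0.9
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},      # P[s][a] = [(s', prob)]
     1: {0: [(0, 1.0)], 1: [(1, 0.5), (0, 0.5)]}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}         # R[s][a]

def q(s, a, V):
    # one-step lookahead value of taking action a in state s
    return R[s][a] + gamma * sum(p * V[sp] for sp, p in P[s][a])

V = {s: 0.0 for s in states}
for _ in range(500):                                   # Bellman backups
    V = {s: max(q(s, a, V) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
```

Because the Bellman operator is a gamma-contraction, 500 sweeps drive the residual to machine noise; the NN-based online version in the paper approximates the same fixed point without enumerating states.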

  9. Identification of the optimal spectral region for plasmonic and nanoplasmonic sensing.

    PubMed

    Otte, Marinus A; Sepúlveda, Borja; Ni, Weihai; Juste, Jorge Pérez; Liz-Marzán, Luis M; Lechuga, Laura M

    2010-01-26

    We present a theoretical and experimental study involving the sensing characteristics of wavelength-interrogated plasmonic sensors based on surface plasmon polaritons (SPP) in planar gold films and on localized surface plasmon resonances (LSPR) of single gold nanorods. The tunability of both sensing platforms allowed us to analyze their bulk and surface sensing characteristics as a function of the plasmon resonance position. We demonstrate that a general figure of merit (FOM), which is equivalent in wavelength and energy scales, can be employed to mutually compare both sensing schemes. Most interestingly, this FOM has revealed a spectral region for which the surface sensitivity performance of both sensor types is optimized, which we attribute to the intrinsic dielectric properties of plasmonic materials. Additionally, in good agreement with theoretical predictions, we experimentally demonstrate that, although the SPP sensor offers a much better bulk sensitivity, the LSPR sensor shows an approximately 15% better performance for surface sensitivity measurements when its FOM is optimized. However, optimization of the substrate refractive index and the accessibility of the relevant molecules to the nanoparticles can lead to a total 3-fold improvement of the FOM in LSPR sensors.

  10. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
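The advantage of analytic derivatives over finite differencing can be seen on a one-variable toy function (not a Pycycle model; the function below is invented):

```python
import math

def f(x):
    return math.sin(x) * math.exp(x)                   # illustrative model output

def dfdx(x):
    return (math.cos(x) + math.sin(x)) * math.exp(x)   # analytic derivative

x0, h = 1.2, 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2.0 * h)               # central finite difference

# The finite-difference estimate needs extra evaluations per design variable
# and carries truncation/round-off error tied to the step h; the analytic
# derivative costs one evaluation and is exact to machine precision.
err = abs(fd - dfdx(x0))
```

For an engine cycle model with many design variables, the finite-difference cost scales with the variable count and the step size must be tuned per variable, which is precisely the burden analytic derivatives remove.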

  11. Dielectric function for doped graphene layer with barium titanate

    NASA Astrophysics Data System (ADS)

    Martinez Ramos, Manuel; Garces Garcia, Eric; Magana, Fernado; Vazquez Fonseca, Gerardo Jorge

    2015-03-01

    The aim of our study is to calculate the dielectric function for a system formed by a graphene layer doped with barium titanate. Density functional theory within the local density approximation, using the plane-wave and pseudopotential scheme as implemented in the Quantum ESPRESSO suite of programs, was used. We considered 128 carbon atoms with a barium titanate cluster of 11 molecules as the unit cell with periodic boundary conditions. Structural optimization was performed by relaxing all atomic positions to minimize the total energy. The band structure, density of states, and linear optical response (the imaginary part of the dielectric tensor) were calculated. We thank Dirección General de Asuntos del Personal Académico de la Universidad Nacional Autónoma de México for partial financial support through Grant IN-106514, and we also thank the Miztli supercomputing center for technical assistance.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E; Yuan, F; Templeton, A

    Purpose: The ultimate goal of radiotherapy treatment planning is to find a treatment that will yield a high tumor control probability (TCP) with an acceptable normal-tissue complication probability (NTCP). Yet most treatment planning today is not based upon optimization of TCPs and NTCPs, but rather upon meeting physical dose and volume constraints defined by the planner. We design treatment plans that optimize TCP directly and contrast them with the clinical dose-based plans. PET imaging is incorporated to evaluate the gain in TCP from dose escalation. Methods: We build a nonlinear mixed integer programming optimization model that maximizes TCP directly while satisfying the dose requirements on the targeted organ and healthy tissues. The solution strategy first fits the TCP function with a piecewise-linear approximation, then solves the problem that maximizes the piecewise-linear approximation of TCP, and finally performs a local neighborhood search to improve the TCP value. To gauge the feasibility, characteristics, and potential benefit of PET-image-guided dose escalation, initial validation consists of fifteen cervical cancer HDR patient cases. These patients had all received a prior 45 Gy of external radiation dose. For both escalated strategies, we consider a 35 Gy PTV dose and two variations (a 37 Gy boost to the BTV vs. a 40 Gy boost) to PET-image pockets. Results: TCP for standard clinical plans ranges from 59.4%-63.6%. TCP for the dose-based PET-guided escalated-dose plans ranges from 63.8%-98.6% across all patients, whereas TCP-optimized plans achieve over 91% for all patients. There is marginal difference in TCP between the 37 Gy-boost and 40 Gy-boost plans. There is no increase in rectum and bladder dose among the plans. Conclusion: Optimizing TCP directly results in highly conformal treatment plans. The TCP-optimized plan is individualized based on the biological PET image of the patient. The TCP-optimization framework is generalizable and has been applied successfully to other external-beam delivery modalities. A clinical trial is ongoing to gauge the clinical significance. Partially supported by the National Science Foundation.
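The piecewise-linear TCP fitting step described in the Methods can be sketched with a standard Poisson TCP curve; the clonogen number, radiosensitivity, dose grid, and dose cap below are illustrative, not clinical values:

```python
import math

def tcp(dose):
    # Poisson TCP model, exp(-N0 * exp(-alpha * D)); parameters are
    # illustrative, not fitted clinical values.
    n0, alpha = 1.0e7, 0.30
    return math.exp(-n0 * math.exp(-alpha * dose))

# Piecewise-linear approximation of the TCP curve on a dose grid
grid = [2.0 * i for i in range(41)]                 # 0 .. 80 Gy in 2 Gy steps
vals = [tcp(d) for d in grid]

def tcp_pwl(dose):
    i = int(dose // 2.0)                            # bracketing segment
    d0, d1 = grid[i], grid[i + 1]
    t = (dose - d0) / (d1 - d0)
    return vals[i] + t * (vals[i + 1] - vals[i])    # linear interpolation

# Maximizing the linearized TCP under dose constraints then fits a mixed
# integer linear framework; a direct grid check stands in for that step here.
cap = 70.0
best_dose = max((d for d in grid if d <= cap), key=tcp_pwl)
```

The linearization makes the steep sigmoid tractable for the solver, and the local neighborhood search mentioned in the abstract would then refine around `best_dose` against the exact TCP.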

  13. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single-component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, which is then followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure for extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not limited to seismic inversion; it could be used to invert different data types that require not only multiple objectives but also multiple physics to describe them.
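The non-dominated sorting at the heart of such genetic algorithms reduces to a dominance test; a minimal Pareto-front extraction over invented objective pairs (both minimized) looks like:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one (minimization convention)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep the points not dominated by any other point
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.5), (5.0, 5.0)]
front = pareto_front(pts)
```

A weighted-sum scalarization would return a single point on this front, chosen by the weights; the non-dominated sorting step instead keeps the whole set, which is the paper's motivation for the vector-objective treatment.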

  14. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed-phase liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well-resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem into a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves that is compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
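The pattern-search driver the authors use per scale can be sketched in its basic compass-search form; this is the generic algorithm on an invented quadratic, not the authors' implementation:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass (pattern) search: poll +/- step along each coordinate,
    accept improvements, and contract the pattern when polling fails."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5              # contract the pattern
            if step < tol:
                break
    return x, fx

best, fbest = pattern_search(lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2,
                             [0.0, 0.0])
```

Being derivative-free, the method suits objectives like chromatographic resolution that are evaluated through simulation and riddled with local maxima; the multi-scale wrapper then restarts it with a richer B-spline parameterization at each scale.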

  15. Progress in navigation filter estimate fusion and its application to spacecraft rendezvous

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1994-01-01

    A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersection of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing and redundancy management.
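The trace-minimizing combination has a familiar scalar special case, inverse-covariance weighting under independent errors; this sketch omits the cross-covariance handling that the paper's algorithm includes:

```python
# Inverse-covariance fusion of two scalar estimates with independent errors:
# the weighting that minimizes the fused variance. A simplified sketch only;
# the full algorithm also accounts for the cross-covariance between filters.
def fuse(x1, p1, x2, p2):
    p = 1.0 / (1.0 / p1 + 1.0 / p2)      # fused variance
    x = p * (x1 / p1 + x2 / p2)          # variance-weighted combination
    return x, p

# two filters' estimates (x) and variances (p) of the same state component
x, p = fuse(10.0, 4.0, 12.0, 1.0)
```

The fused variance is smaller than either input variance, the scalar counterpart of the abstract's observation that the fused error ellipsoid lies within the intersection of the two filters' ellipsoids.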

  16. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.

  17. On optimal strategies in event-constrained differential games

    NASA Technical Reports Server (NTRS)

    Heymann, M.; Rajan, N.; Ardema, M.

    1985-01-01

    Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
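An interior penalty method of the kind employed here can be sketched on a one-variable constrained problem; the objective, constraint, and crude grid search below are illustrative, not the paper's game-theoretic setting:

```python
import math

# Interior penalty sketch: minimize f(x) = (x - 3)^2 subject to x <= 2 by
# adding a barrier term -mu * log(2 - x) and letting mu shrink toward zero.
def barrier_argmin(mu, n=200_001):
    lo, hi = -10.0, 2.0 - 1e-9            # stay strictly inside the constraint
    best_x, best_v = lo, float("inf")
    for i in range(n):                    # crude grid search, for illustration
        x = lo + (hi - lo) * i / (n - 1)
        v = (x - 3.0) ** 2 - mu * math.log(2.0 - x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

x_loose = barrier_argmin(1.0)     # strong barrier keeps the iterate well inside
x_tight = barrier_argmin(0.01)    # weak barrier approaches the constrained optimum
```

The barrier keeps every iterate strictly feasible, which is the property that makes interior penalties attractive for event-constrained games: trajectories approximating the optimal strategies never violate the event constraint along the way.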

  18. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^-4 hartree. This offers a practical way to calculate wave functions with a predetermined fixed core and reduced-basis valence orbitals at reduced computational cost. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals. An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10^-5 hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms and thus require solving reduced-dimension secular equations. The dimensions are not expected to increase with system size, and thus the local basis equation may find use in linear-scaling electronic structure calculations.

  19. Zero-sum two-player game theoretic formulation of affine nonlinear discrete-time systems using neural networks.

    PubMed

    Mehraeen, Shahab; Dierks, Travis; Jagannathan, S; Crow, Mariesa L

    2013-12-01

    In this paper, the nearly optimal solution for discrete-time (DT) affine nonlinear control systems in the presence of partially unknown internal system dynamics and disturbances is considered. The approach is based on the successive approximate solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in optimal control. A successive approximation approach for updating the control and disturbance inputs of DT nonlinear affine systems is proposed. Moreover, sufficient conditions for the convergence of the approximate HJI solution to the saddle point are derived, and an iterative approach to approximate the HJI equation using a neural network (NN) is presented. Then, the requirement of full knowledge of the internal dynamics of the nonlinear DT system is relaxed by using a second NN online approximator. The result is a closed-loop optimal NN controller via offline learning. A numerical example is provided illustrating the effectiveness of the approach.

  20. A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu

    We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004) which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.

  1. Pricing of swing options: A Monte Carlo simulation approach

    NASA Astrophysics Data System (ADS)

    Leow, Kai-Siong

    We study the problem of pricing swing options, a class of multiple-early-exercise options traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded with the counterparty from settlement to expiration is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm based on piecewise linear concave approximation of value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We present a proof of almost sure convergence: the algorithm generates the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example of pricing a natural gas swing call option.
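The backward dynamic programming recursion over (time, remaining rights) can be sketched for a drastically simplified swing contract: i.i.d. prices, no penalty term, and an exact value table in place of the paper's piecewise linear concave approximation; all numbers are invented:

```python
# Toy swing option: at most one unit per period, a global cap of two units,
# three periods, i.i.d. equally likely spot prices, no penalty term.
prices = [8.0, 10.0, 12.0]                  # equally likely spot prices
strike, T, cap = 10.0, 3, 2                 # strike, periods, global volume cap

# V[t][r] = option value at time t with r exercise rights remaining,
# taken before the period-t price is revealed.
V = [[0.0] * (cap + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for r in range(cap + 1):
        expected = 0.0
        for s in prices:                    # average over the revealed price
            expected += max(q * (s - strike) + V[t + 1][r - q]
                            for q in range(min(r, 1) + 1))
        V[t][r] = expected / len(prices)

value = V[0][cap]                           # price under optimal exercise
```

In the paper's setting the price state is path-dependent and the value table in `r` is replaced by a piecewise linear concave approximation updated by simulation, but the backward max-over-exercise structure is the same.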

  2. A New Maneuver for Escape Trajectories

    NASA Technical Reports Server (NTRS)

    Adams, Robert B.

    2008-01-01

    This presentation put forth a new maneuver for escape trajectories and specifically sought an analytical approximation for medium-thrust trajectories. In most low-thrust derivations, the idea is that escape velocity is best achieved by accelerating along the velocity vector, because the rate of change of specific orbital energy is a function of velocity and acceleration. However, Levin (1952) suggested that while this is a locally optimal solution, it might not be a globally optimal one. Turning the acceleration inward would drop the periapse, giving a higher velocity later in the trajectory. Acceleration at that point would be dotted with a higher-magnitude velocity, giving a greater rate of change of mechanical energy. The author then hypothesized that decelerating from the initial orbit and then accelerating at periapse would not lead to a gain in specific orbital energy; however, the hypothesis proved incorrect. After considerable derivation, it was determined that this new maneuver outperforms a direct burn when the overall Delta-V budget exceeds the initial orbital velocity (the author has termed this the Heinlein maneuver). The author provides a physical explanation for this maneuver and presents optimization analyses.
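The physical explanation rests on dE/dt = a·v: an impulse applied where speed is higher buys more specific orbital energy (the Oberth effect). A vis-viva sketch in toy units (mu = 1, hypothetical radii, not from the presentation) makes this concrete:

```python
# Oberth effect sketch: the same Delta-v raises specific orbital energy more
# when applied where the speed is higher (at periapse). Toy numbers, mu = 1.
mu = 1.0
rp, ra = 1.0, 4.0                          # periapse and apoapse radii
a = (rp + ra) / 2.0                        # semi-major axis of the ellipse
E0 = -mu / (2.0 * a)                       # initial specific orbital energy

def speed(r):
    return (2.0 * (E0 + mu / r)) ** 0.5    # vis-viva equation

dv = 0.1
def energy_after_burn(r):
    v = speed(r) + dv                      # prograde impulsive burn at radius r
    return v * v / 2.0 - mu / r

gain_peri = energy_after_burn(rp) - E0     # = v_peri * dv + dv^2 / 2
gain_apo = energy_after_burn(ra) - E0      # = v_apo * dv + dv^2 / 2
```

Since the energy gain is v·dv + dv²/2, the periapse burn dominates, which is why first lowering periapse (spending some Delta-V to decelerate) can pay off once the remaining budget is large, as in the maneuver described above.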

  3. Well-temperate phage: Optimal bet-hedging against local environmental collapses

    DOE PAGES

    Maslov, Sergei; Sneppen, Kim

    2015-06-02

    Upon infection of their bacterial hosts, temperate phages must choose between lysogenic and lytic developmental strategies. Here we apply the game-theoretic bet-hedging strategy introduced by Kelly to derive the optimal lysogenic fraction of the total population of phages as a function of the frequency and intensity of environmental downturns affecting the lytic subpopulation. The “well-temperate” phage of our title is characterized by the best long-term population growth rate. We show that this is realized when the lysogenization frequency is approximately equal to the probability of lytic population collapse. We further predict the existence of sharp boundaries in the system’s environmental, ecological, and biophysical parameters separating the regions where this temperate strategy is optimal from those dominated by purely virulent or dormant (purely lysogenic) strategies. We show that the virulent strategy works best for phages with a large diversity of hosts and access to multiple independent environments reachable by diffusion. Conversely, progressively more temperate or even dormant strategies are favored in environments that are subject to frequent and severe temporal downturns.
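The Kelly-style calculation can be sketched by maximizing the expected log growth over the lysogenic fraction f; the growth factors and collapse probability below are invented, and with them the optimum sits near the collapse probability, echoing the abstract's main result:

```python
import math

# Kelly bet-hedging sketch: a fraction f of the population turns lysogenic
# (safe, low growth) and 1 - f goes lytic (high growth, but wiped out with
# probability p per cycle). Growth factors are illustrative.
def log_growth(f, p, lytic_gain=10.0, lysogenic_gain=1.0):
    good = math.log(f * lysogenic_gain + (1.0 - f) * lytic_gain)
    bad = math.log(f * lysogenic_gain)      # lytic subpopulation collapses
    return (1.0 - p) * good + p * bad       # expected log growth rate (Kelly)

p = 0.1                                     # probability of lytic collapse
fs = [i / 1000.0 for i in range(1, 1000)]
best_f = max(fs, key=lambda f: log_growth(f, p))
# For these parameters the exact optimum is 10 * p / 9; as the lytic gain
# grows, it tends to p itself, i.e. lysogenization frequency ~ collapse
# probability, as the abstract states.
```

Any fixed f > 0 protects against ruin (the all-lytic strategy has expected log growth of minus infinity once p > 0), which is the bet-hedging logic behind temperance.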

  4. Well-temperate phage: Optimal bet-hedging against local environmental collapses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maslov, Sergei; Sneppen, Kim

    Upon infection of their bacterial hosts, temperate phages must choose between lysogenic and lytic developmental strategies. Here we apply the game-theoretic bet-hedging strategy introduced by Kelly to derive the optimal lysogenic fraction of the total population of phages as a function of the frequency and intensity of environmental downturns affecting the lytic subpopulation. The “well-temperate” phage of our title is characterized by the best long-term population growth rate. We show that it is realized when the lysogenization frequency is approximately equal to the probability of lytic population collapse. We further predict the existence of sharp boundaries in the system’s environmental, ecological, and biophysical parameters separating the regions where this temperate strategy is optimal from those dominated by purely virulent or dormant (purely lysogenic) strategies. We show that the virulent strategy works best for phages with a large diversity of hosts and access to multiple independent environments reachable by diffusion. Conversely, progressively more temperate or even dormant strategies are favored in environments that are subject to frequent and severe temporal downturns.

  5. Capture of near-Earth objects with low-thrust propulsion and invariant manifolds

    NASA Astrophysics Data System (ADS)

    Tang, Gao; Jiang, Fanghua

    2016-01-01

    In this paper, a mission incorporating low-thrust propulsion and invariant manifolds to capture near-Earth objects (NEOs) is investigated. The mission begins with the spacecraft rendezvousing with the NEO and terminates once the joint system is inserted into a libration point orbit (LPO). The spacecraft takes advantage of stable invariant manifolds for low-energy ballistic capture, while low-thrust propulsion is employed to retrieve the joint spacecraft-asteroid system. Global optimization methods are proposed for the preliminary design. Local direct and indirect methods are applied to optimize the two-impulse transfers, and indirect methods are implemented to optimize the low-thrust trajectory and estimate the largest retrievable mass. To overcome the difficulty that arises from bang-bang control, a homotopic approach is applied to find an approximate solution. By detecting the switching moments of the bang-bang control, the efficiency and accuracy of numerical integration are guaranteed; by using the homotopic solution as the initial guess, the shooting function becomes easy to solve. The relationship between the maximum thrust and the retrieval mass is investigated, and we find, both numerically and theoretically, that a larger thrust is preferred.
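
    The homotopic smoothing of bang-bang control mentioned above can be sketched as follows. This uses the common fuel-optimal relaxation of the cost integrand from u to u - eps*u*(1-u) (a standard formulation, assumed here rather than taken verbatim from the paper), under which the optimal throttle becomes a continuous, clipped function of the switching function S and sharpens back to bang-bang as eps goes to zero.

```python
# Hedged sketch of a standard fuel-optimal homotopy: relaxing the cost
# integrand u to u - eps*u*(1-u) turns the discontinuous bang-bang throttle
# into a continuous function of the switching function S, which eases the
# shooting problem; eps -> 0 recovers bang-bang.  Conventions assumed:
# bang-bang optimal throttle is u = 1 when S < 0 and u = 0 when S > 0.

def throttle(S, eps):
    """Optimal throttle u in [0, 1] under the smoothed cost u - eps*u*(1-u)."""
    if eps == 0.0:
        return 1.0 if S < 0 else 0.0  # pure bang-bang limit
    u = 0.5 * (1.0 - S / eps)         # interior stationary point
    return min(1.0, max(0.0, u))      # saturate to the admissible range

# The smoothed throttle is continuous, and sharpens as eps shrinks:
for eps in (0.5, 0.1, 0.01):
    print([round(throttle(S, eps), 3) for S in (-1.0, -0.05, 0.0, 0.05, 1.0)])
```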

  6. Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon

    Regression has successfully been incorporated into memetic algorithms (MAs) to build surrogate models for the objective or constraint landscape of optimization problems. This helps to alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist an MA in choosing the individuals that will undergo refinement. A support-vector-assisted MA was recently proposed to reduce the number of function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from infeasible ones based on past solutions, so that search effort can be focused on promising regions only. For problems having equality constraints, however, the feasible space is extremely small, making it very difficult for the global search component of the MA to produce feasible solutions; the classification into feasible and infeasible space thus becomes ineffective. In this paper, a novel strategy to overcome this limitation is proposed, particularly for problems having one and only one equality constraint: the raw constraint value of an individual, rather than its feasibility class, is utilized.
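
    The raw-constraint-value idea can be sketched with a deliberately simple surrogate (the 1-nearest-neighbour model, the threshold, and the toy constraint below are illustrative assumptions, not the paper's actual machinery): the predicted magnitude of h(x) decides which individuals receive the expensive local refinement.

```python
# Sketch: instead of classifying feasible vs infeasible (hopeless when the
# feasible set of an equality constraint h(x) = 0 has measure zero), a
# surrogate predicts the raw constraint value, and only individuals predicted
# to lie near h = 0 get refined.  Surrogate and threshold are assumptions.

def nearest_neighbour_surrogate(archive):
    """archive: list of (x, h) pairs from past exact evaluations."""
    def predict(x):
        _, h = min(archive, key=lambda rec: abs(rec[0] - x))
        return h
    return predict

# Past exact evaluations of a toy constraint h(x) = x^2 - 4 (roots at +/- 2).
archive = [(x / 2, (x / 2) ** 2 - 4) for x in range(-10, 11)]
predict_h = nearest_neighbour_surrogate(archive)

population = [-3.1, -1.9, 0.2, 2.05, 4.0]
# Refine only the individuals whose predicted |h| is small.
to_refine = [x for x in population if abs(predict_h(x)) < 1.0]
print(to_refine)  # candidates near the constraint manifold h = 0
```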

  7. Gradient optimization of finite projected entangled pair states

    NASA Astrophysics Data System (ADS)

    Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin

    2017-05-01

    Projected entangled pair states (PEPS) methods have been proven to be powerful tools for solving strongly correlated quantum many-body problems in two dimensions. However, due to the high computational scaling with the virtual bond dimension D, practical applications of PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimizing the ground state with the imaginary-time evolution method and a simple update scheme can reach larger bond dimensions; however, the accuracy of its rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that combining the imaginary-time evolution method with a simple update, Monte Carlo sampling techniques, and gradient optimization offers an efficient method to calculate the PEPS ground state. By taking advantage of massive parallel computing, we can study quantum systems with bond dimensions up to D = 10 without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.

  8. An adjoint method for gradient-based optimization of stellarator coil shapes

    NASA Astrophysics Data System (ADS)

    Paul, E. J.; Landreman, M.; Bader, A.; Dorland, W.

    2018-07-01

    We present a method for stellarator coil design via gradient-based optimization of the coil-winding surface. The REGCOIL (Landreman 2017 Nucl. Fusion 57 046003) approach is used to obtain the coil shapes on the winding surface using a continuous current potential. We apply the adjoint method to calculate derivatives of the objective function, allowing for efficient computation of analytic gradients while eliminating the numerical noise of approximate derivatives. We are able to improve engineering properties of the coils by targeting the root-mean-squared current density in the objective function. We obtain winding surfaces for W7-X and HSX which simultaneously decrease the normal magnetic field on the plasma surface and increase the surface-averaged distance between the coils and the plasma in comparison with the actual winding surfaces. The coils computed on the optimized surfaces feature a smaller toroidal extent and curvature and increased inter-coil spacing. A technique for computation of the local sensitivity of figures of merit to normal displacements of the winding surface is presented, with potential applications for understanding engineering tolerances.
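
    The adjoint trick itself fits in a few lines. Here is a toy 2x2 analogue (the system matrix, right-hand side, and objective are invented for illustration and have nothing to do with REGCOIL): for a state x(p) defined by A x = b(p) and an objective J = f(x), a single adjoint solve of A^T lam = df/dx gives the exact gradient dJ/dp = lam . db/dp, matching a finite-difference check without its numerical noise.

```python
# Toy adjoint-gradient sketch (all values illustrative, not from the paper).

def solve2(A, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return ((rhs[0] * d - b * rhs[1]) / det, (a * rhs[1] - rhs[0] * c) / det)

A  = ((4.0, 1.0), (2.0, 3.0))
At = ((4.0, 2.0), (1.0, 3.0))  # A transposed

def b(p):            # parameter-dependent right-hand side
    return (p, p * p)

def db_dp(p):
    return (1.0, 2.0 * p)

def J(p):            # objective J = |x|^2 with A x = b(p)
    x = solve2(A, b(p))
    return x[0] ** 2 + x[1] ** 2

p = 1.0
x = solve2(A, b(p))
lam = solve2(At, (2 * x[0], 2 * x[1]))      # adjoint solve: A^T lam = dJ/dx
grad_adjoint = sum(l * db for l, db in zip(lam, db_dp(p)))

h = 1e-6                                    # central finite-difference check
grad_fd = (J(p + h) - J(p - h)) / (2 * h)
print(grad_adjoint, grad_fd)                # both ~ 0.28
```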

  9. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure errors degrade the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method, and commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. The method uses precise ray tracing, not approximate calculation, to evaluate the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function, and a simulated annealing algorithm seeks the optimal combination of rotation angles of the optical elements. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
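
    The optimization loop can be sketched with a toy stand-in for the ray-traced model (the phasor representation and error values below are assumptions for illustration): each element contributes a complex figure-error phasor that clocking by angle t rotates as e * exp(1j*t), the composite wavefront RMS is taken as the magnitude of the phasor sum, and simulated annealing searches the clocking angles, as in the paper.

```python
import cmath, math, random

# Toy figure-error clocking sketch; the per-element errors are assumed.
random.seed(0)
errors = [1.0 + 0.0j, 0.8 + 0.3j, 0.5 - 0.4j]

def composite_rms(angles):
    return abs(sum(e * cmath.exp(1j * t) for e, t in zip(errors, angles)))

angles = [0.0] * len(errors)
cost = best = composite_rms(angles)
unclocked = cost
T = 1.0
for _ in range(20000):
    cand = [t + random.gauss(0, 0.3) for t in angles]
    c = composite_rms(cand)
    # accept improvements always, uphill moves with Boltzmann probability
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        angles, cost = cand, c
        best = min(best, cost)
    T = max(1e-3, T * 0.9995)  # geometric cooling schedule

print(round(unclocked, 3), round(best, 3))  # clocking sharply reduces the RMS
```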

  10. Magnetic hyperthermia properties of nanoparticles inside lysosomes using kinetic Monte Carlo simulations: Influence of key parameters and dipolar interactions, and evidence for strong spatial variation of heating power

    NASA Astrophysics Data System (ADS)

    Tan, R. P.; Carrey, J.; Respaud, M.

    2014-12-01

    Understanding the influence of dipolar interactions in magnetic hyperthermia experiments is of crucial importance for fine optimization of nanoparticle (NP) heating power. In this study we use a kinetic Monte Carlo algorithm to calculate hysteresis loops that correctly account for both time and temperature. This algorithm is shown to correctly reproduce the high-frequency hysteresis loop of both superparamagnetic and ferromagnetic NPs without any ad hoc or artificial parameters. The algorithm is easily parallelizable with a good speed-up behavior, which considerably decreases the calculation time on several processors and enables the study of assemblies of several thousands of NPs. The specific absorption rate (SAR) of magnetic NPs dispersed inside spherical lysosomes is studied as a function of several key parameters: volume concentration, applied magnetic field, lysosome size, NP diameter, and anisotropy. The influence of these parameters is illustrated and comprehensively explained. In summary, magnetic interactions increase the coercive field, saturation field, and hysteresis area of major loops. However, for small amplitude magnetic fields such as those used in magnetic hyperthermia, the heating power as a function of concentration can increase, decrease, or display a bell shape, depending on the relationship between the applied magnetic field and the coercive/saturation fields of the NPs. The hysteresis area is found to be well correlated with the parallel or antiparallel nature of the dipolar field acting on each particle. The heating power of a given NP is strongly influenced by a local concentration involving approximately 20 neighbors. Because this local concentration strongly decreases upon approaching the surface, the heating power increases or decreases in the vicinity of the lysosome membrane. The amplitude of variation reaches more than one order of magnitude in certain conditions. 
This transition occurs on a thickness corresponding to approximately 1.3 times the mean distance between two neighbors. The amplitude and sign of this variation is explained. Finally, implications of these various findings are discussed in the framework of magnetic hyperthermia optimization. It is concluded that feedback on two specific points from biology experiments is required for further advancement of the optimization of magnetic NPs for magnetic hyperthermia. The present simulations will be an advantageous tool to optimize magnetic NPs heating power and interpret experimental results.

  11. Policy Iteration for $H_\infty$ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems, and they can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L2-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  12. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for the computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of the stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  13. Fast approximation for joint optimization of segmentation, shape, and location priors, and its application in gallbladder segmentation.

    PubMed

    Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu

    2017-05-01

    This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.

  14. Laboratory Performance of the Shaped Pupil Coronagraphic Architecture for the WFIRST-AFTA Coronagraph

    NASA Technical Reports Server (NTRS)

    Cady, Eric; Mejia Prada, Camilo; An, Xin; Balasubramanian, Kunjithapatha; Diaz, Rosemary; Kasdin, N. Jeremy; Kern, Brian; Kuhnert, Andreas; Nemati, Bijan; Patterson, Keith; hide

    2015-01-01

    One of the two primary architectures being tested for the WFIRST-AFTA coronagraph instrument is the shaped pupil coronagraph, which uses a binary aperture in a pupil plane to create localized regions of high contrast in a subsequent focal plane. The aperture shapes are determined by optimization, and can be designed to work in the presence of secondary obscurations and spiders, an important consideration for coronagraphy with WFIRST-AFTA. We present the current performance of the shaped pupil testbed, including the results of AFTA Milestone 2, in which approximately 6 × 10^-9 contrast was achieved in three independent runs starting from a neutral setting.

  15. Reconstruction of local perturbations in periodic surfaces

    NASA Astrophysics Data System (ADS)

    Lechleiter, Armin; Zhang, Ruming

    2018-03-01

    This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, the scattered field is no longer periodic, so classical methods that reduce quasi-periodic fields to one periodic cell are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which makes it possible to design an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization: the support of the perturbation is located by a simple method, which reduces the inverse problem from an infinite domain to one periodic cell. The second step applies the Newton-CG method to the associated optimization problem, with the perturbation approximated in a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.

  16. Hyperbolic metamaterial lens with hydrodynamic nonlocal response.

    PubMed

    Yan, Wei; Mortensen, N Asger; Wubs, Martijn

    2013-06-17

    We investigate the effects of hydrodynamic nonlocal response in hyperbolic metamaterials (HMMs), focusing on the experimentally realizable parameter regime where unit cells are much smaller than an optical wavelength but much larger than the wavelengths of the longitudinal pressure waves of the free-electron plasma in the metal constituents. We derive the nonlocal corrections to the effective material parameters analytically, and illustrate the noticeable nonlocal effects on the dispersion curves numerically. As an application, we find that the focusing characteristics of a HMM lens in the local-response approximation and in the hydrodynamic Drude model can differ considerably. In particular, the optimal frequency for imaging in the nonlocal theory is blueshifted with respect to that in the local theory. Thus, to detect whether nonlocal response is at work in a hyperbolic metamaterial, we propose to measure the near-field distribution of a hyperbolic metamaterial lens.

  17. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. 
Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
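
    The lookup-table scheme can be sketched in a toy 2-D setting (the detector geometry, grids, and field model below are assumptions, not the paper's actual configuration): the received signal intensity from an ideal dipole falls off as the projection of its orientation onto the line of sight divided by distance squared, and a LUT of normalized RSI patterns over candidate poses is searched for the closest match.

```python
import math

# Toy LUT dipole localization; geometry and grids are assumed.
detectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def rsi(pos, theta):
    """Predicted RSIs of a unit dipole at pos with orientation theta."""
    out = []
    for dx, dy in detectors:
        rx, ry = dx - pos[0], dy - pos[1]
        r = math.hypot(rx, ry)
        # dipole pattern (p_hat . r_hat) / r^2, written with unnormalized r
        out.append((math.cos(theta) * rx + math.sin(theta) * ry) / r ** 3)
    return out

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

# Build the LUT on a coarse grid of positions and orientations.
lut = {}
for i in range(1, 10):
    for j in range(1, 10):
        for k in range(8):
            key = (float(i), float(j), k * math.pi / 4)
            lut[key] = normalize(rsi(key[:2], key[2]))

def locate(measured):
    """Return the LUT pose whose normalized RSI pattern best matches."""
    m = normalize(measured)
    return min(lut, key=lambda key: sum((a - b) ** 2 for a, b in zip(lut[key], m)))

true_pos, true_theta = (4.0, 7.0), math.pi / 4
est = locate(rsi(true_pos, true_theta))
print(est)  # grid entry matching the true pose
```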

  18. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
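
    The winning variant above, constriction-coefficient PSO with a ring topology, is short enough to sketch directly (the swarm size, iteration count, and test function are illustrative choices, not the paper's benchmark setup); the constriction coefficient follows the standard formula with c1 = c2 = 2.05.

```python
import math, random

# Minimal constriction-coefficient PSO with a ring topology on the sphere
# function.  Swarm size and iteration count are illustrative.
random.seed(1)
C1 = C2 = 2.05
PHI = C1 + C2
CHI = 2.0 / abs(2.0 - PHI - math.sqrt(PHI * PHI - 4.0 * PHI))  # ~ 0.7298

def sphere(x):
    return sum(v * v for v in x)

DIM, N, ITERS = 2, 20, 200
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]

for _ in range(ITERS):
    for i in range(N):
        # ring topology: best of the particle and its two index neighbours
        lbest = min((pbest[(i - 1) % N], pbest[i], pbest[(i + 1) % N]),
                    key=sphere)
        for d in range(DIM):
            vel[i][d] = CHI * (vel[i][d]
                               + C1 * random.random() * (pbest[i][d] - pos[i][d])
                               + C2 * random.random() * (lbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]

best = min(pbest, key=sphere)
print(sphere(best))  # close to the global minimum at the origin
```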

  19. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  20. Contributions on Optimizing Approximations in the Study of Melting and Solidification Processes That Occur in Processing by Electro-Erosion

    NASA Astrophysics Data System (ADS)

    Potra, F. L.; Potra, T.; Soporan, V. F.

    We propose two optimization methods for the processes that occur in EDM (Electrical Discharge Machining). The first introduces a new function approximating the thermal flux energy in the EDM machine. Classical research approximates this energy with a Gaussian function, but in this unconventional technology the Gaussian bell vanishes only as r → +∞, where r is the radius of the crater produced by EDM. We instead introduce a cubic spline regression that descends to zero at the crater's boundary. In the second optimization, we propose modifications to the working technology: the displacement of the tool electrode toward the workpiece electrode is adjusted so that the material melts in optimal time, as is the dielectric-liquid feeding speed governing solidification of the expelled material. This is realized using the FAHP algorithm, based on the theory of eigenvalues and eigenvectors, which leads to mean values of best approximation.

  1. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
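
    The moment-matching procedure can be illustrated with a toy algebraic function standing in for the CFD output (the function, means, and standard deviations below are assumptions): with independent normal inputs, the first-order approximations are mean ≈ f(mu) and variance ≈ sum of (df/dx_i)^2 * sigma_i^2, which a Monte Carlo run confirms for small input uncertainties.

```python
import math, random

# First-order statistical moment propagation vs Monte Carlo (toy stand-in
# for the CFD output; all values assumed).
random.seed(2)

def f(x, y):                 # stand-in for the CFD output functional
    return x * y + x * x

mu_x, mu_y = 2.0, 3.0
sig_x, sig_y = 0.05, 0.05    # small input uncertainties (assumed)

# Analytic sensitivity derivatives of f at the mean.
df_dx = mu_y + 2.0 * mu_x    # = 7
df_dy = mu_x                 # = 2

mean_approx = f(mu_x, mu_y)
std_approx = math.sqrt((df_dx * sig_x) ** 2 + (df_dy * sig_y) ** 2)

# Monte Carlo check with independent normal inputs.
samples = [f(random.gauss(mu_x, sig_x), random.gauss(mu_y, sig_y))
           for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_std = math.sqrt(sum((s - mc_mean) ** 2 for s in samples) / len(samples))

print(mean_approx, round(std_approx, 4), round(mc_mean, 3), round(mc_std, 4))
```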

  2. Finite-dimensional compensators for infinite-dimensional systems via Galerkin-type approximation

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1990-01-01

    In this paper, the existence and construction of stabilizing compensators for linear time-invariant systems defined on Hilbert spaces are discussed. An existence result is established using Galerkin-type approximations in which independent basis elements are used instead of the complete set of eigenvectors. A design procedure based on approximate solutions of the optimal regulator and optimal observer via Galerkin-type approximation is given, and the Schumacher approach is used to reduce the dimension of the compensators. A detailed discussion of parabolic and hereditary differential systems is included.

  3. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    PubMed Central

    Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.

    2016-01-01

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3–12 mm. The same configuration was also applied for the double-source simulation, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, the simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. 
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2 sources in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources at wide range of source sizes, depths, and separations. The average accuracy of localizing CoM for single-source and grouped CoM for double sources is approximately 1 mm except deep-seated target. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation for the situation with multiple targets, such as metastatic tumor models. PMID:27147371

  4. Nonlinear programming extensions to rational function approximations of unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1987-01-01

    This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.

  5. Parametric optimal control of uncertain systems under an optimistic value criterion

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly, so the optimal feedback control may be a complicated time-varying function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered in order to simplify the expression of the optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
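The Riccati-based linear quadratic baseline the abstract refers to has a closed form in the scalar case, which can be sketched as follows; the plant coefficients and weights below are illustrative, not from the paper's turbofan model.

```python
import math

def scalar_lqr(a, b, q, r):
    """Positive root of the scalar algebraic Riccati equation
    2*a*p - (b**2/r)*p**2 + q = 0 for the system x' = a*x + b*u
    with cost integral(q*x**2 + r*u**2), and the optimal
    state-feedback gain k = b*p/r."""
    p = (a + math.sqrt(a * a + b * b * q / r)) * r / (b * b)
    return p, b * p / r

# Illustrative unstable plant: x' = 0.5*x + u, unit weights
p, k = scalar_lqr(0.5, 1.0, 1.0, 1.0)
print(round(p, 4), round(0.5 - k, 4))  # Riccati solution and (stable, negative) closed-loop pole
```

The closed-loop pole a - b*k is negative, confirming that the Riccati solution stabilizes the plant.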

  6. Assessing the impact of heart failure specialist services on patient populations.

    PubMed

    Lyratzopoulos, Georgios; Cook, Gary A; McElduff, Patrick; Havely, Daniel; Edwards, Richard; Heller, Richard F

    2004-05-24

The assessment of the impact of healthcare interventions may help commissioners of healthcare services to make optimal decisions. This can be particularly the case if the impact assessment relates to specific patient populations and uses timely local data. We examined the potential impact on readmissions and mortality of specialist heart failure services capable of delivering treatments such as β-blockers and a Nurse-Led Educational Intervention (N-LEI). We performed statistical modelling of prevented or postponed events among previously hospitalised patients, using estimates of: treatment uptake and contraindications (based on local audit data); treatment effectiveness and intolerance (based on the literature); and the annual number of hospitalizations per patient and annual risk of death (based on routine data). Optimal treatment uptake among eligible but untreated patients would over one year prevent or postpone 11% of all expected readmissions and 18% of all expected deaths for spironolactone, 13% of all expected readmissions and 22% of all expected deaths for β-blockers (carvedilol), and 20% of all expected readmissions and an uncertain number of deaths for N-LEI. Optimal combined treatment uptake for all three interventions during one year among all eligible but untreated patients would prevent or postpone 37% of all expected readmissions and a minimum of 36% of all expected deaths. In a population of previously hospitalised patients with low previous uptake of β-blockers and no uptake of N-LEI, optimal combined uptake of interventions through specialist heart failure services can potentially help prevent or postpone approximately four times as many readmissions and a minimum of twice as many deaths compared with simply optimising uptake of spironolactone (not necessarily requiring specialist services). Examination of the impact of different heart failure interventions can inform rational planning of relevant healthcare services.
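The modelling arithmetic behind figures of this kind can be sketched as a simple chain of proportions; the input values below are hypothetical placeholders, not the study's audit data.

```python
def prevented_fraction(untreated, contraindicated, intolerant, rrr):
    """Fraction of all expected events prevented or postponed if every
    eligible, currently untreated patient received the intervention.
    All arguments are proportions in [0, 1]: the untreated share,
    the contraindicated share among them, the intolerant share among
    those eligible, and the treatment's relative risk reduction."""
    eligible = untreated * (1.0 - contraindicated)
    on_treatment = eligible * (1.0 - intolerant)
    return on_treatment * rrr

# Hypothetical inputs: 60% untreated, 20% contraindicated,
# 10% intolerant, 35% relative risk reduction
print(round(prevented_fraction(0.60, 0.20, 0.10, 0.35), 3))
```

Multiplying out the chain (0.60 × 0.80 × 0.90 × 0.35) gives roughly 15% of expected events prevented under these assumed inputs.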

  7. Lens dose in routine head CT: comparison of different optimization methods with anthropomorphic phantoms.

    PubMed

    Nikupaavo, Ulla; Kaasalainen, Touko; Reijonen, Vappu; Ahonen, Sanna-Mari; Kortesniemi, Mika

    2015-01-01

The purpose of this study was to evaluate different optimization methods for reducing eye lens dose in head CT. Two anthropomorphic phantoms were scanned with a routine head CT protocol for evaluation of the brain that included bismuth shielding, gantry tilting, organ-based tube current modulation, or combinations of these techniques. High-sensitivity metal oxide semiconductor field-effect transistor dosimeters were used to measure local equivalent doses in the head region. The relative changes in image noise and contrast were determined by ROI analysis. The mean absorbed lens doses varied from 4.9 to 19.7 mGy and from 10.8 to 16.9 mGy in the two phantoms. The most efficient method for reducing lens dose was gantry tilting, which left the lenses outside the primary radiation beam, resulting in an approximately 75% decrease in lens dose. Image noise decreased, especially in the anterior part of the brain. The use of organ-based tube current modulation resulted in an approximately 30% decrease in lens dose. However, image noise increased as much as 30% in the posterior and central parts of the brain. With bismuth shields, it was possible to reduce lens dose as much as 25%. Our results indicate that gantry tilt, when possible, is an effective method for reducing exposure of the eye lenses in CT of the brain without compromising image quality. Measurements in two different phantoms showed how patient geometry affects the optimization. When lenses can only partially be cropped outside the primary beam, organ-based tube current modulation or bismuth shields can be useful in lens dose reduction.

  8. Comparing the Financial Impact of Several Hospitals on Their Local Markets.

    PubMed

    Rotarius, Timothy; Liberman, Aaron

    Several studies that measured the financial impact of hospitals on their local markets are examined. Descriptive analyses were performed to ascertain if there are any identifying characteristics and emerging patterns in the data. After hospitals were categorized into small, medium, and large classifications based on the number of employees, various predictive insights were discovered. Smaller hospitals could be expected to contribute approximately 7.3% to the local economy, whereas medium-sized hospitals would likely contribute approximately 11.4% to the financial value of the local market. Finally, larger hospitals may contribute approximately 16% to their local economies.

  9. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
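The advantage of analytic derivatives over finite-difference approximations can be illustrated on a toy function; the function below is an arbitrary stand-in for a cycle-model output, not anything from PyCycle itself.

```python
import math

def f(x):
    return math.exp(x) * math.sin(x)   # stand-in for a cycle-model output

def df_analytic(x):
    """Exact derivative of f, analogous to the analytic derivatives
    PyCycle provides to the optimizer."""
    return math.exp(x) * (math.sin(x) + math.cos(x))

def df_forward(x, h):
    """Forward finite-difference approximation with step size h."""
    return (f(x + h) - f(x)) / h

x = 1.0
errors = {h: abs(df_forward(x, h) - df_analytic(x)) for h in (1e-2, 1e-6, 1e-10)}
for h, err in errors.items():
    print(f"h={h:.0e}  |error|={err:.2e}")
```

A large step suffers truncation error and a tiny step suffers floating-point cancellation, so the finite-difference error can never be driven to zero; the analytic derivative has neither problem, which is what makes gradient-based optimization stable and efficient here.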

  10. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  11. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  12. On the convergence of local approximations to pseudodifferential operators with applications

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1994-01-01

We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L^1 error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
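The rational-approximation idea behind Padé approximants to a square root can be illustrated in the scalar setting (the paper's object is the square root of an operator, so this is only an analogy): truncating the continued fraction for sqrt(1+x) yields a sequence of rational approximants whose error shrinks with each term.

```python
import math

def cf_sqrt(x, n):
    """n-term truncation of the continued fraction
    sqrt(1+x) = 1 + x / (1 + sqrt(1+x)); each truncation is a
    rational (Pade-type) approximant in x."""
    r = 1.0
    for _ in range(n):
        r = 1.0 + x / (1.0 + r)
    return r

x = 0.5
exact = math.sqrt(1.0 + x)
for n in (1, 3, 6):
    print(n, f"{abs(cf_sqrt(x, n) - exact):.1e}")
```

Each additional term of the continued fraction raises the degree of the rational approximant and reduces the error by roughly a constant factor for fixed x.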

  13. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
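For reference, the exact (error-free) Karush-Kuhn-Tucker conditions that the paper generalizes can be sketched as follows; the paper's δ-optimality condition relaxes these relations to hold only to within a controlled error, with the multipliers obtained from an approximating quadratic program.

```latex
% Exact KKT conditions for  min f(x)  subject to  g_i(x) <= 0,  i = 1..m:
\nabla f(x^{*}) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^{*}) = 0,
\qquad \lambda_i \ge 0,
\qquad g_i(x^{*}) \le 0,
\qquad \lambda_i \, g_i(x^{*}) = 0 .
```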

  14. The cost of local, multi-professional obstetric emergencies training.

    PubMed

    Yau, Christopher W H; Pizzo, Elena; Morris, Steve; Odd, David E; Winter, Cathy; Draycott, Timothy J

    2016-10-01

    We aim to outline the annual cost of setting up and running a standard, local, multi-professional obstetric emergencies training course, PROMPT (PRactical Obstetric Multi-Professional Training), at Southmead Hospital, Bristol, UK - a unit caring for approximately 6500 births per year. A retrospective, micro-costing analysis was performed. Start-up costs included purchasing training mannequins and teaching props, printing of training materials and assembly of emergency boxes (real and training). The variable costs included administration time, room hire, additional printing and the cost of releasing all maternity staff in the unit, either as attendees or trainers. Potential, extra start-up costs for maternity units without established training were also included. The start-up costs were €5574 and the variable costs for 1 year were €143 232. The total cost of establishing and running training at Southmead for 1 year was €148 806. Releasing staff as attendees or trainers accounted for 89% of the total first year costs, and 92% of the variable costs. The cost of running training in a maternity unit with around 6500 births per year was approximately €23 000 per 1000 births for the first year and around €22 000 per 1000 births in subsequent years. The cost of local, multi-professional obstetric emergencies training is not cheap, with staff costs potentially representing over 90% of the total expenditure. It is therefore vital that organizations consider the clinical effectiveness of local training packages before implementing them, to ensure the optimal allocation of finite healthcare budgets. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.
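The per-1000-births figures quoted above follow directly from the reported totals; the arithmetic can be checked in a few lines.

```python
# Totals reported in the abstract (euros)
start_up = 5_574
variable_per_year = 143_232
births_per_year = 6_500

first_year_total = start_up + variable_per_year           # 148,806
per_1000_first = first_year_total / (births_per_year / 1_000)
per_1000_subsequent = variable_per_year / (births_per_year / 1_000)
print(round(per_1000_first), round(per_1000_subsequent))  # ~23,000 and ~22,000
```

Dividing the first-year total by 6.5 thousand births recovers the approximately €23,000 per 1000 births quoted for the first year, and the variable cost alone recovers the approximately €22,000 for subsequent years.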

  15. Efficient and Adaptive Methods for Computing Accurate Potential Surfaces for Quantum Nuclear Effects: Applications to Hydrogen-Transfer Reactions.

    PubMed

    DeGregorio, Nicole; Iyengar, Srinivasan S

    2018-01-09

    We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, an approximation to the (b) potential surface, its (c) gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to compute (a) potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene) where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. 
Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen-bonded systems is demonstrated here.

  16. A small perturbation based optimization approach for the frequency placement of high aspect ratio wings

    NASA Astrophysics Data System (ADS)

    Goltsch, Mandy

Design denotes the transformation of an identified need to its physical embodiment in a traditionally iterative approach of trial and error. Conceptual design plays a prominent role, but an almost infinite number of possible solutions at the outset of design necessitates fast evaluations. The corresponding practice of empirical equations and low-fidelity analyses becomes obsolete in the light of novel concepts. Ever increasing system complexity and resource scarcity mandate new approaches to adequately capture system characteristics. Contemporary concerns in atmospheric science and homeland security created an operational need for unconventional configurations. Unmanned long-endurance flight at high altitudes offers a unique showcase for the exploration of new design spaces and the incidental deficit of conceptual modeling and simulation capabilities. Structural and aerodynamic performance requirements necessitate lightweight materials and high aspect ratio wings, resulting in distinct structural and aeroelastic response characteristics that stand in close correlation with natural vibration modes. The present research effort revolves around the development of an efficient and accurate optimization algorithm for high aspect ratio wings subject to natural frequency constraints. Foundational cornerstones are beam dimensional reduction and modal perturbation redesign. Local and global analyses inherent to the former suggest corresponding levels of local and global optimization. The present approach departs from this suggestion. It introduces local-level surrogate models to enable a methodology that consists of multi-level analyses feeding into a single-level optimization. The innovative heart of the new algorithm originates in small perturbation theory. A sequence of small perturbation solutions allows the optimizer to make incremental movements within the design space. It enables a directed search that is free of costly gradients. 
System matrices are decomposed based on a Timoshenko stiffness effect separation. The formulation of respective linear changes falls back on surrogate models that approximate cross sectional properties. Corresponding functional responses are readily available. Their direct use by the small perturbation based optimizer ensures constitutive laws and eliminates a previously necessary optimization at the local level. The scope of the present work is derived from an existing configuration such as a conceptual baseline or a prototype that experiences aeroelastic instabilities. Due to the lack of respective design studies in the traditional design process it is not uncommon for an initial wing design to have such stability problems. The developed optimization scheme allows the effective redesign of high aspect ratio wings subject to natural frequency objectives. Its successful application is demonstrated by three separate optimization studies. The implementation results of all three studies confirm that the gradient liberation of the new methodology brings about great computational savings. A generic wing study is used to indicate the connection between the proposed methodology and the aeroelastic stability problems outlined in the motivation. It is also used to illustrate an important practical aspect of structural redesign, i.e., a minimum departure from the existing baseline configuration. The proposed optimization scheme is naturally conducive to this practical aspect by using a minimum change optimization criterion. However, only an elemental formulation truly enables a minimum change solution. It accounts for the spanwise significance of a structural modification to the mode of interest. This idea of localized reinforcement greatly benefits the practical realization of structural redesign efforts. The implementation results also highlight the fundamental limitation of the proposed methodology. 
The exclusive consideration of mass and stiffness effects on modal response characteristics disregards other disciplinary problems such as allowable stresses or buckling loads. Both are of central importance to the structural integrity of an aircraft but are currently not accounted for in the proposed optimization scheme. The concluding discussion thus outlines the need for respective constraints and/or additional analyses to capture all requirements necessary for a comprehensive structural redesign study.

  17. A well-posed optimal spectral element approximation for the Stokes problem

    NASA Technical Reports Server (NTRS)

    Maday, Y.; Patera, A. T.; Ronquist, E. M.

    1987-01-01

A method is proposed for the spectral element simulation of incompressible flow. This method constitutes a well-posed optimal approximation of the steady Stokes problem with no spurious modes in the pressure. The resulting method is analyzed, and numerical results are presented for a model problem.

  18. Profile shape optimization in multi-jet impingement cooling of dimpled topologies for local heat transfer enhancement

    NASA Astrophysics Data System (ADS)

    Negi, Deepchand Singh; Pattamatta, Arvind

    2015-04-01

The present study deals with shape optimization of dimples on the target surface in multi-jet impingement heat transfer. A Bezier polynomial formulation is incorporated to generate candidate dimple profile shapes, and a multi-objective optimization is performed. The optimized dimple shape exhibits higher local Nusselt number values than the reference hemispherical dimpled plate and can be used to alleviate local temperature hot spots on the target surface.
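A Bezier-polynomial profile of the kind used here is generated by blending control points with Bernstein basis polynomials; the control points below are hypothetical, chosen only to suggest a shallow dimple cross-section.

```python
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve in Bernstein form at parameter t in [0, 1]."""
    n = len(control_points) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        b = comb(n, i) * t**i * (1.0 - t) ** (n - i)  # Bernstein basis B_{i,n}(t)
        x += b * px
        y += b * py
    return x, y

# Hypothetical dimple cross-section: (radial position, depth) control points
profile = [(0.0, 0.0), (0.3, -0.15), (0.7, -0.15), (1.0, 0.0)]
print(bezier(profile, 0.5))
```

Moving the interior control points reshapes the whole profile smoothly, which is why Bezier parameterizations pair well with shape optimizers: a handful of coordinates becomes the design vector.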

  19. A temperature-based feedback control system for electromagnetic phased-array hyperthermia: theory and simulation.

    PubMed

    Kowalski, M E; Jin, J M

    2003-03-07

    A hybrid proportional-integral-in-time and cost-minimizing-in-space feedback control system for electromagnetic, deep regional hyperthermia is proposed. The unique features of this controller are that (1) it uses temperature, not specific absorption rate, as the criterion for selecting the relative phases and amplitudes with which to drive the electromagnetic phased-array used for hyperthermia and (2) it requires on-line computations that are all deterministic in duration. The former feature, in addition to optimizing the treatment directly on the basis of a clinically relevant quantity, also allows the controller to sense and react to time- and temperature-dependent changes in local blood perfusion rates and other factors that can significantly impact the temperature distribution quality of the delivered treatment. The latter feature makes it feasible to implement the scheme on-line in a real-time feedback control loop. This is in sharp contrast to other temperature optimization techniques proposed in the literature that generally involve an iterative approximation that cannot be guaranteed to terminate in a fixed amount of computational time. An example of its application is presented to illustrate the properties and demonstrate the capability of the controller to sense and compensate for local, time-dependent changes in blood perfusion rates.

  20. Divergence in cryptic leaf colour provides local camouflage in an alpine plant.

    PubMed

    Niu, Yang; Chen, Zhe; Stevens, Martin; Sun, Hang

    2017-10-11

The efficacy of camouflage through background matching is highly environment-dependent, often resulting in intraspecific colour divergence in animals to optimize crypsis in different visual environments. This phenomenon is largely unexplored in plants, although several lines of evidence suggest they do use crypsis to avoid damage by herbivores. Using Corydalis hemidicentra, an alpine plant with cryptic leaf colour, we quantified background matching between leaves and surrounding rocks in five populations based on an approximate model of their butterfly enemy's colour perception. We also investigated the pigment basis of leaf colour variation and the association between feeding risk and camouflage efficacy. We show that plants exhibit remarkable colour divergence between populations, consistent with differences in rock appearances. Leaf colour varies because of different quantitative combinations of two basic pigments, chlorophyll and anthocyanin, plus different air spaces. As expected, leaf colours are better matched against their native backgrounds than against foreign ones in the eyes of the butterfly. Furthermore, improved crypsis tends to be associated with a higher level of feeding risk. These results suggest that divergent cryptic leaf colour may have evolved to optimize local camouflage in various visual environments, extending our understanding of colour evolution and intraspecific phenotype diversity in plants. © 2017 The Author(s).

  1. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    PubMed

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in the dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from studies based on next-generation sequencing technology. By extending the theories for the tail probability of the range of a sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach, integrating the approximation and permutations, to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data, where we found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package, which now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
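The permutation procedure whose cost motivates the closed-form approximation can be sketched generically as follows; the association score below is a deliberately simple stand-in, not eLSA's local similarity/trend score.

```python
import random

def assoc_score(x, y):
    """Toy association score: magnitude of the mean-centred cross
    product (the paper's local trend score is more elaborate)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return abs(sum((a - mx) * (b - my) for a, b in zip(x, y)))

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """p-value as the fraction of shuffled series scoring at least
    as high as the observed pair; slow because every pair needs
    thousands of shuffles."""
    rng = random.Random(seed)
    s0 = assoc_score(x, y)
    yp = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(yp)
        hits += assoc_score(x, yp) >= s0
    return (hits + 1) / (n_perm + 1)   # add-one keeps p > 0

x = [1, 2, 3, 4, 5, 4, 3, 2]
y = [2, 3, 4, 5, 6, 5, 4, 3]   # strongly co-varying with x
print(permutation_pvalue(x, y) < 0.05)
```

Every pairwise test repeats this inner loop, so an all-versus-all scan over thousands of taxa multiplies the cost enormously; replacing the loop with a closed-form tail probability is what makes such scans feasible.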

  2. Synthesizing epidemiological and economic optima for control of immunizing infections.

    PubMed

    Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T

    2011-08-23

Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by the relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems. In particular, strong regional differences in the costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
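The epidemiological threshold the abstract starts from is the classic critical vaccination fraction, which depends only on the basic reproductive number; the R0 value below is illustrative.

```python
def critical_vaccination_threshold(r0):
    """Classic threshold for interrupting local transmission of an
    immunizing infection: p_c = 1 - 1/R0. Economic considerations
    (costs of infection vs. control) enter separately, as the
    abstract emphasises."""
    return 1.0 - 1.0 / r0

# Measles-like basic reproductive number (illustrative)
print(round(critical_vaccination_threshold(15.0), 3))  # -> 0.933
```

For highly transmissible infections like measles the threshold sits above 90%, which is why cost asymmetries between coupled populations can make the economically optimal coverage diverge so sharply from this purely epidemiological target.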

  3. Quantum algorithm for energy matching in hard optimization problems

    NASA Astrophysics Data System (ADS)

    Baldwin, C. L.; Laumann, C. R.

    2018-06-01

    We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p -spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.

  4. Real-World Application of Robust Design Optimization Assisted by Response Surface Approximation and Visual Data-Mining

    NASA Astrophysics Data System (ADS)

    Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru

    A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.

  5. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. In particular, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  6. Structural physical approximation for the realization of the optimal singlet fraction with two measurements

    NASA Astrophysics Data System (ADS)

    Adhikari, Satyabrata

    2018-04-01

    Structural physical approximation (SPA) has been exploited to approximate nonphysical operations such as the partial transpose. It has already been studied in the context of entanglement detection, where it was found that a two-qubit state is entangled if the minimum eigenvalue of the SPA to the partial transpose is less than 2/9. We find an application of the SPA to the partial transpose in the estimation of the optimal singlet fraction. We show that the optimal singlet fraction can be expressed in terms of the minimum eigenvalue of the SPA to the partial transpose. We also show that the optimal singlet fraction can be realized using Hong-Ou-Mandel interferometry with only two detectors. Further, we show that the generated hybrid entangled state between a qubit and a binary coherent state can be used as a resource state in quantum teleportation.
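
    The partial-transpose map at the heart of the SPA construction is easy to state concretely. The sketch below computes the minimum eigenvalue of the partial transpose of a two-qubit state, the quantity whose SPA the abstract discusses; the SPA noise admixture itself is omitted, so this is a simplified stand-in rather than the paper's construction.

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose on the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices (a, b; c, d), with b and d on qubit 2
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

# singlet state |psi-> = (|01> - |10>)/sqrt(2), the maximally entangled benchmark
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)
lam_min = np.linalg.eigvalsh(partial_transpose(rho)).min()   # -1/2 for the singlet
```

    A negative minimum eigenvalue certifies entanglement (the PPT criterion); the SPA replaces the unphysical transpose map with an implementable one whose output spectrum encodes the same information.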

  7. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
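
    The kernel-interpolation step can be illustrated with a Nadaraya-Watson-style Gaussian-kernel estimate of feature activity versus depth; the feature values and kernel width below are synthetic assumptions, not clinical data or the authors' exact scheme.

```python
import numpy as np

def kernel_profile(depths, values, query, width):
    """Gaussian-kernel estimate of feature activity as a function of depth."""
    w = np.exp(-0.5 * ((query[:, None] - depths[None, :]) / width) ** 2)
    return (w @ values) / w.sum(axis=1)

depth = np.linspace(0.0, 10.0, 21)                            # mm along trajectory
activity = np.where((depth > 4.0) & (depth < 7.0), 1.0, 0.1)  # toy nucleus "bump"
grid = np.linspace(0.0, 10.0, 101)
profile = kernel_profile(depth, activity, grid, width=0.5)
```

    Wider kernels smooth more at the cost of depth resolution, mirroring the smoothing/resolution trade-off reported above.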

  8. Minimal-Approximation-Based Distributed Consensus Tracking of a Class of Uncertain Nonlinear Multiagent Systems With Unknown Control Directions.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2017-03-28

    A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.

  9. Size-dependent error of the density functional theory ionization potential in vacuum and solution

    DOE PAGES

    Sosa Vazquez, Xochitl A.; Isborn, Christine M.

    2015-12-22

    Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. As a result, in vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.

  11. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground-state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  12. Two-parametric δ′-interactions: approximation by Schrödinger operators with localized rank-two perturbations

    NASA Astrophysics Data System (ADS)

    Golovaty, Yuriy

    2018-06-01

    We construct a norm resolvent approximation to the two-parametric family of δ′-point interactions by Schrödinger operators with localized rank-two perturbations coupled with short-range potentials. In particular, a new approximation to the δ′-interactions is obtained.

  13. On the possibility of control restoration in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    The problem of effective heat protection of the permeable surfaces of hypersonic aircraft is considered. The mathematical model accounts for the physico-chemical processes (dissociation and ionization) in the laminar boundary layer of a compressible gas. Statements of the direct heat and mass transfer problems are given: for preset controls, the boundary layer model parameters are computed and the local and total heat flows, the friction forces, and the power of the blowing system are determined. A. A. Dorodnicyn's generalized integral relations method is used as the computational basis. With this approach, the optimal control (blowing into the boundary layer, for continuous control functions) was constructed as the solution of the direct problem in an extremal statement. Statements of the inverse problems are also given: the control laws ensuring a preset local heat flow and local tangential friction are restored. The differences between the interpolation and approximation statements are discussed. The possibility of unique control restoration is established and proved (at the stagnation point). Results of computational experiments are presented.

  14. Fluorescence X-ray absorption spectroscopy using a Ge pixel array detector: application to high-temperature superconducting thin-film single crystals.

    PubMed

    Oyanagi, H; Tsukada, A; Naito, M; Saini, N L; Lampert, M O; Gutknecht, D; Dressler, P; Ogawa, S; Kasai, K; Mohamed, S; Fukano, A

    2006-07-01

    A Ge pixel array detector with 100 segments was applied to fluorescence X-ray absorption spectroscopy, probing the local structure of high-temperature superconducting thin-film single crystals (100 nm in thickness). Independent monitoring of pixel signals allows real-time inspection of artifacts owing to substrate diffractions. By optimizing the grazing-incidence angle θ and adjusting the azimuthal angle φ, smooth extended X-ray absorption fine structure (EXAFS) oscillations were obtained for strained (La,Sr)2CuO4 thin-film single crystals grown by molecular beam epitaxy. The results of EXAFS data analysis show that the local structure (CuO6 octahedron) in (La,Sr)2CuO4 thin films grown on LaSrAlO4 and SrTiO3 substrates is uniaxially distorted, changing the tetragonality by approximately 5 × 10⁻³ in accordance with the crystallographic lattice mismatch. It is demonstrated that the local structure of thin-film single crystals can be probed with high accuracy at low temperature without interference from substrates.

  15. Exact and Approximate Stability of Solutions to Traveling Salesman Problems.

    PubMed

    Niendorf, Moritz; Girard, Anouck R

    2018-02-01

    This paper presents the stability analysis of an optimal tour for the symmetric traveling salesman problem (TSP) by obtaining stability regions. The stability region of an optimal tour is the set of all cost changes for which that solution remains optimal and can be understood as the margin of optimality for a solution with respect to perturbations in the problem data. It is known that it is not possible to test in polynomial time whether an optimal tour remains optimal after the cost of an arbitrary set of edges changes. Therefore, this paper develops tractable methods to obtain under- and over-approximations of stability regions based on neighborhoods and relaxations. The application of the results to the two-neighborhood and the minimum 1-tree (M1T) relaxation is discussed in detail. For Euclidean TSPs, stability regions with respect to vertex location perturbations and the notion of safe radii and location criticalities are introduced. Benefits of this paper include insight into robustness properties of tours, minimum spanning trees, M1Ts, and fast methods to evaluate optimality after perturbations occur. Numerical examples are given to demonstrate the methods and achievable approximation quality.
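
    For tiny instances, the stability margin of a single edge can be computed exactly by brute force, which makes the concept concrete. This enumeration is purely illustrative (the paper's contribution is precisely to avoid it via neighborhoods and relaxations), and all names are assumptions.

```python
import itertools
import numpy as np

def tour_cost(tour, D):
    n = len(tour)
    return sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))

def all_tours(n):
    return [(0,) + p for p in itertools.permutations(range(1, n))]

def stability_margin(D, edge):
    """Largest increase of one symmetric edge cost for which the current
    optimal tour remains optimal (brute force, only sensible for tiny n)."""
    n = len(D)
    costs = {t: tour_cost(t, D) for t in all_tours(n)}
    best = min(costs, key=costs.get)
    uses = lambda t: any({t[k], t[(k + 1) % n]} == set(edge) for k in range(n))
    if not uses(best):
        return float("inf")      # raising this edge never affects the optimum
    rival = min(c for t, c in costs.items() if not uses(t))
    return rival - costs[best]

rng = np.random.default_rng(1)
pts = rng.random((6, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
opt = min(all_tours(6), key=lambda t: tour_cost(t, D))
margin = stability_margin(D, edge=(opt[0], opt[1]))
```

    The margin is the gap between the optimum and the best tour avoiding the edge; the paper's neighborhood and M1T bounds sandwich this quantity without enumerating tours.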

  16. Local thermodynamic mapping for effective liquid density-functional theory

    NASA Technical Reports Server (NTRS)

    Kyrlidis, Agathagelos; Brown, Robert A.

    1992-01-01

    The structural-mapping approximation introduced by Lutsko and Baus (1990) in the generalized effective-liquid approximation is extended to include a local thermodynamic mapping based on a spatially dependent effective density for approximating the solid phase in terms of the uniform liquid. This latter approximation, called the local generalized effective-liquid approximation (LGELA) yields excellent predictions for the free energy of hard-sphere solids and for the conditions of coexistence of a hard-sphere fcc solid with a liquid. Moreover, the predicted free energy remains single valued for calculations with more loosely packed crystalline structures, such as the diamond lattice. The spatial dependence of the weighted density makes the LGELA useful in the study of inhomogeneous solids.

  17. Optimal partitioning of random programs across two processors

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

    The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
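
    The approximation in question, replacing the expected maximum of independent module execution times with the maximum of their expectations, always underestimates, which a quick Monte Carlo check makes visible. The exponential timing model below is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, trials = 4, 200_000
# independent execution times of m modules (toy exponential model)
samples = rng.exponential(scale=1.0, size=(trials, m))

exp_of_max = samples.max(axis=1).mean()   # E[max_i X_i], the true expected makespan
max_of_exp = samples.mean(axis=0).max()   # max_i E[X_i], the approximation
```

    For i.i.d. Exp(1) times, E[max of m] equals the harmonic number H_m (about 2.08 for m = 4) versus a maximum expectation of 1, so the gap the paper's approximation-free proof must contend with is substantial.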

  18. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range ofmore » fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.« less

  19. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Patnaik, Surya N.

    2000-01-01

    A preliminary aircraft engine design methodology is being developed that utilizes a cascade optimization strategy together with neural network and regression approximation methods. The cascade strategy employs different optimization algorithms in a specified sequence. The neural network and regression methods are used to approximate solutions obtained from the NASA Engine Performance Program (NEPP), which implements engine thermodynamic cycle and performance analysis models. The new methodology is proving to be more robust and computationally efficient than the conventional optimization approach of using a single optimization algorithm with direct reanalysis. The methodology has been demonstrated on a preliminary design problem for a novel subsonic turbofan engine concept that incorporates a wave rotor as a cycle-topping device. Computations of maximum thrust were obtained for a specific design point in the engine mission profile. The results (depicted in the figure) show a significant improvement in the maximum thrust obtained using the new methodology in comparison to benchmark solutions obtained using NEPP in a manual design mode.
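
    The essence of optimizing on a regression approximation instead of direct reanalysis can be sketched as follows; the "expensive" function, sample count, and polynomial degree are all illustrative assumptions, not the NEPP models or the cascade strategy itself.

```python
import numpy as np

def cheap_surrogate_argmax(expensive, lo, hi, n_samples=9, degree=4):
    """Fit a polynomial regression surrogate to a few expensive evaluations,
    then maximize the cheap surrogate on a dense grid."""
    xs = np.linspace(lo, hi, n_samples)
    ys = np.array([expensive(x) for x in xs])
    coeffs = np.polyfit(xs, ys, degree)
    grid = np.linspace(lo, hi, 1001)
    return grid[np.argmax(np.polyval(coeffs, grid))]

# stand-in for an expensive cycle analysis: smooth, with one interior optimum
expensive = lambda x: -(x - 0.3) ** 2 + 0.05 * np.sin(3.0 * x)
x_star = cheap_surrogate_argmax(expensive, 0.0, 1.0)
```

    The surrogate is evaluated a thousand times for the cost of nine true analyses; a cascade strategy would hand this approximate optimum to the next optimizer in the sequence for refinement.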

  20. Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
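
    A scalar sketch of the predictive layer described above: propagate a Kalman-style predicted mean and variance of a Gauss-Markov process over the window and evaluate per-step threshold-exceedance probabilities. The dynamics, noise levels, and alarm threshold are assumed values for illustration, not the paper's design.

```python
import math

def crossing_probabilities(x0, P0, a, q, level, horizon):
    """Per-step probabilities that x_{k+1} = a*x_k + w, w ~ N(0, q),
    exceeds `level`, propagating the predicted mean and variance."""
    probs, m, P = [], x0, P0
    for _ in range(horizon):
        m, P = a * m, a * a * P + q            # one-step-ahead prediction
        z = (level - m) / math.sqrt(P)
        probs.append(0.5 * math.erfc(z / math.sqrt(2.0)))   # P(x_k > level)
    return probs

probs = crossing_probabilities(x0=0.8, P0=0.1, a=0.95, q=0.05, level=1.0, horizon=10)
alarm = max(probs) > 0.2    # crude surrogate for the optimal alarm decision rule
```

    An optimal alarm region would be chosen to fix the detection probability over the whole window while minimizing false alarms; the per-step probabilities above are the raw ingredients of that trade-off.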

  1. Design Process for High Speed Civil Transport Aircraft Improved by Neural Network and Regression Methods

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.

    1998-01-01

    A key challenge in designing the new High Speed Civil Transport (HSCT) aircraft is determining a good match between the airframe and engine. Multidisciplinary design optimization can be used to solve the problem by adjusting parameters of both the engine and the airframe. Earlier, an example problem was presented of an HSCT aircraft with four mixed-flow turbofan engines and a baseline mission to carry 305 passengers 5000 nautical miles at a cruise speed of Mach 2.4. The problem was solved by coupling NASA Lewis Research Center's design optimization testbed (COMETBOARDS) with NASA Langley Research Center's Flight Optimization System (FLOPS). The computing time expended in solving the problem was substantial, and the instability of the FLOPS analyzer at certain design points caused difficulties. In an attempt to alleviate both of these limitations, we explored the use of two approximation concepts in the design optimization process. The two concepts, which are based on neural network and linear regression approximation, provide the reanalysis capability and design sensitivity analysis information required for the optimization process. The HSCT aircraft optimization problem was solved by using three alternate approaches; that is, the original FLOPS analyzer and two approximate (derived) analyzers. The approximate analyzers were calibrated and used in three different ranges of the design variables; narrow (interpolated), standard, and wide (extrapolated).

  2. Approximate optimal tracking control for near-surface AUVs with wave disturbances

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Su, Hao; Tang, Gongyou

    2016-10-01

    This paper considers the optimal trajectory tracking control problem for near-surface autonomous underwater vehicles (AUVs) in the presence of wave disturbances. An approximate optimal tracking control (AOTC) approach is proposed. Firstly, a six-degrees-of-freedom (six-DOF) AUV model with its body-fixed coordinate system is decoupled and simplified, and then a nonlinear control model of AUVs in the vertical plane is given. Also, an exosystem model of wave disturbances is constructed based on the Hirom approximation formula. Secondly, the time-parameterized desired trajectory to be tracked by the AUV is represented by the exosystem. Then, the coupled two-point boundary value (TPBV) problem of optimal tracking control for AUVs is derived from the theory of quadratic optimal control. By using a recently developed successive approximation approach to construct sequences, the coupled TPBV problem is transformed into a problem of solving two decoupled linear differential sequences of state vectors and adjoint vectors. By iteratively solving the two equation sequences, the AOTC law is obtained, which consists of a nonlinear optimal feedback term, an expected output tracking term, a feedforward disturbance rejection term, and a nonlinear compensatory term. Furthermore, a wave disturbance observer model is designed in order to make the control law physically realizable. Simulation is carried out by using the Remote Environmental Unit (REMUS) AUV model to demonstrate the effectiveness of the proposed algorithm.

  3. Application of ab initio many-body perturbation theory with Gaussian basis sets to the singlet and triplet excitations of organic molecules

    NASA Astrophysics Data System (ADS)

    Hamed, Samia; Rangel, Tonatiuh; Bruneval, Fabien; Neaton, Jeffrey B.

    Quantitative understanding of charged and neutral excitations of organic molecules is critical in diverse areas of study that include astrophysics and the development of energy technologies that are clean and efficient. The recent use of local basis sets with ab initio many-body perturbation theory in the GW approximation and the Bethe-Salpeter equation (BSE) approach, methods traditionally applied to periodic condensed phases with a plane-wave basis, has opened the door to detailed study of such excitations for molecules, as well as accurate numerical benchmarks. Here, through a series of systematic benchmarks with a Gaussian basis, we report on the extent to which the predictive power and utility of this approach depend critically on interdependent underlying approximations and choices for molecules, including the mean-field starting point (e.g., optimally tuned range-separated hybrids, pure DFT functionals, and untuned hybrids), the GW scheme, and the Tamm-Dancoff approximation. We demonstrate the effects of these choices in the context of Thiel's set while drawing analogies to linear-response time-dependent DFT and making comparisons to best theoretical estimates from higher-order wavefunction-based theories.

  4. Localization accuracy of sphere fiducials in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias

    2014-03-01

    In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
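
    Template matching by normalized cross correlation, one of the localization approaches compared above, can be sketched on a synthetic 2-D image; the Gaussian blob stands in for a sphere fiducial, and all sizes and positions are assumptions for illustration.

```python
import numpy as np

def localize(image, template):
    """Return the (row, col) top-left position maximizing normalized cross
    correlation of the template over all valid shifts."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            p = image[r:r + th, c:c + tw]
            p = p - p.mean()
            denom = np.sqrt((p * p).sum()) * tnorm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# synthetic 32x32 image with a Gaussian "fiducial" centered at (12, 20)
yy, xx = np.mgrid[0:32, 0:32]
image = np.exp(-((yy - 12.0) ** 2 + (xx - 20.0) ** 2) / 4.0)
ty, tx = np.mgrid[0:7, 0:7]
template = np.exp(-((ty - 3.0) ** 2 + (tx - 3.0) ** 2) / 4.0)
top_left, score = localize(image, template)
center = (top_left[0] + 3, top_left[1] + 3)
```

    This yields an integer-voxel estimate; interpolating the image before matching, as studied above, is what refines the result toward the reported tens-of-micrometres accuracy.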

  5. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  6. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
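
    A 1-D sketch of bounded-variation-style denoising: gradient descent on a least-squares term plus a smoothed total-variation seminorm. The smoothing parameter eps and step size are illustrative assumptions; the paper's primal-dual algorithms handle the nondifferentiable seminorm directly rather than smoothing it.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.05, iters=1500, eps=1e-2):
    """Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)               # derivative of smoothed |d|
        tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x = x - step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # blocky signal
noisy = clean + 0.3 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

    The BV seminorm penalizes total variation rather than squared gradients, which is why it recovers the flat plateaus and sharp jump of blocky signals instead of blurring the edge.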

  7. Structural tailoring of counter rotation propfans

    NASA Technical Reports Server (NTRS)

    Brown, Kenneth W.; Hopkins, D. A.

    1989-01-01

    The STAT program was designed for the optimization of single-rotation tractor propfan designs. New propfan designs, however, generally consist of two counter-rotating propfan rotors. STAT is constructed to contain two levels of analysis. An interior loop, consisting of accurate, efficient approximate analyses, is used to perform the primary propfan optimization. Once an optimum design has been obtained, a series of refined analyses is conducted. These analyses, while too computationally expensive for the optimization loop, are of sufficient accuracy to validate the optimized design. Should the design prove to be unacceptable, provisions are made for recalibration of the approximate analyses and subsequent reoptimization.

  8. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  9. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As many researchers know, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that it is impractical for realistic CPU budgets. This paper proposes a new stochastic optimization algorithm, the simulated stochastic approximation annealing algorithm, which combines simulated annealing with the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
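The square-root cooling schedule itself is easy to sketch. The toy minimizer below is a hypothetical single-chain illustration only: the paper's algorithm additionally adapts its target distribution via stochastic approximation Monte Carlo, which is omitted here.

```python
import math
import random

def sa_sqrt_cooling(f, x0, n_iters=20000, t0=2.0, step=0.5, seed=0):
    """Toy 1-D simulated annealing with a square-root cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, n_iters + 1):
        t = t0 / math.sqrt(k)          # square-root cooling schedule
        y = x + rng.gauss(0.0, step)   # random-walk proposal
        fy = f(y)
        # Metropolis acceptance at the current temperature
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```

The point of the paper is that, unlike plain simulated annealing, the combined algorithm retains a global convergence guarantee under this much faster schedule.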

  10. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. A tuning law for the near optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  11. Ab-initio study on electronic properties of rocksalt SnAs

    NASA Astrophysics Data System (ADS)

    Babariya, Bindiya; Vaghela, M. V.; Gajjar, P. N.

    2018-05-01

    Within the framework of the local density approximation for exchange and correlation, the ab-initio density functional theory method implemented in the Abinit code is used to compute the electronic energy band structure, density of states, and charge density of SnAs in the rocksalt phase. Our optimized lattice constant agrees with the experimental value to within 0.59%. The computed electronic energy bands along the high-symmetry directions Γ→K→X→Γ→L→X→W→L→U show metallic character. The lowest band in the electronic band structure is separated from the next higher band by approximately 1.70 eV, and no crossing between the lowest two bands is seen. The density of states reveals p-p orbital hybridization between the Sn and As atoms. The spherical contours around Sn and As in the charge density plot indicate partly ionic and partly covalent bonding. The Fermi surface topology results from a single band crossing along the L direction at Ef.

  12. Two time scale output feedback regulation for ill-conditioned systems

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1986-01-01

    Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.

  13. Simulative design and process optimization of the two-stage stretch-blow molding process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-22

    The total production costs of PET bottles are significantly affected by the cost of raw material; approximately 70% of the total costs are spent on raw material. The stretch-blow molding industry therefore aims to reduce total production costs through optimized material efficiency. However, there is often a trade-off between optimized material efficiency and the required product properties. Due to a multitude of complex boundary conditions, the design process for new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE tools supports the design process by reducing development time and costs. This paper describes an approach to iteratively determine an optimized preform geometry and corresponding process parameters. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. The wall thickness distribution is evaluated against an objective function, and the preform geometry as well as the process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history, and the resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied to a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations show that the design process can be supported by this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. Validation of the computed results is in progress.

  14. Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.

    PubMed

    Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R

    2015-11-05

    Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers, two implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases. IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most computationally expensive method, and analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames in clinical data can be analyzed automatically, and only 15% of the frames require manual supervision. The high success rate on phantom data (97% with IpOpt) indicates that even less supervision may become feasible. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Simulative design and process optimization of the two-stage stretch-blow molding process

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-01

    The total production costs of PET bottles are significantly affected by the cost of raw material; approximately 70% of the total costs are spent on raw material. The stretch-blow molding industry therefore aims to reduce total production costs through optimized material efficiency. However, there is often a trade-off between optimized material efficiency and the required product properties. Due to a multitude of complex boundary conditions, the design process for new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE tools supports the design process by reducing development time and costs. This paper describes an approach to iteratively determine an optimized preform geometry and corresponding process parameters. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. The wall thickness distribution is evaluated against an objective function, and the preform geometry as well as the process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history, and the resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied to a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations show that the design process can be supported by this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. Validation of the computed results is in progress.

  16. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  17. Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.

  18. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  19. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains, from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform, and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
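As a point of reference for the baselines discussed above, a density-ordered greedy in the spirit of G(x) can be sketched as follows (hypothetical sketch; the paper's M(x) uses agent decomposition and market auctions instead, which is not reproduced here):

```python
import random

def greedy_multiknapsack(values, weights, capacities, rng=None):
    """Greedy baseline for the 0-1 multiple knapsack problem: take items
    in decreasing value density and place each in the first knapsack with
    room. Passing an rng randomizes the visit order, loosely mirroring
    the random-shuffle variant G(x)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    if rng is not None:
        rng.shuffle(order)           # optional randomized visit order
    remaining = list(capacities)
    assignment = {}                  # item index -> knapsack index
    total = 0
    for i in order:
        for k, cap in enumerate(remaining):
            if weights[i] <= cap:
                remaining[k] -= weights[i]
                assignment[i] = k
                total += values[i]
                break                # item placed; no backtracking
    return total, assignment

_ = random  # rng argument is optional; random.Random(seed) works
```

Because there is no backtracking, the greedy can lock itself out of the optimum (e.g. with values 60/100/120, weights 10/20/30 and one knapsack of capacity 50 it returns 160, not the optimal 220), which is exactly the local-maximum behavior M(x) is designed to escape.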

  20. DFT study of CdS-PVA film

    NASA Astrophysics Data System (ADS)

    Bala, Vaneeta; Tripathi, S. K.; Kumar, Ranjan

    2015-02-01

    Density functional theory has been applied to study cadmium sulphide-polyvinyl alcohol nanocomposite film. A structural model of two isotactic polyvinyl alcohol (I-PVA) chains around one cadmium sulphide nanoparticle is considered, in which each chain consists of three monomer units of [-(CH2CH(OH))-]. All of the hydroxyl groups in the I-PVA chains are directed toward the cadmium sulphide nanoparticle. Electronic and structural properties are investigated using the ab-initio density functional code SIESTA. Structural optimizations are done using the local density approximation (LDA). The exchange correlation functional of LDA is parameterized by the Ceperley-Alder (CA) approach. The core electrons are represented by improved Troullier-Martins pseudopotentials. The densities of states clearly show the semiconducting nature of the cadmium sulphide-polyvinyl alcohol nanocomposite.

  1. Optimal Use of Combined Modality Therapy in the Treatment of Esophageal Cancer.

    PubMed

    Shaikh, Talha; Meyer, Joshua E; Horwitz, Eric M

    2017-07-01

    Esophageal cancer is associated with a poor prognosis with 5-year survival rates of approximately 15% to 20%. Although patients with early stage disease may adequately be treated with a single modality, combined therapy typically consisting of neoadjuvant chemoradiation followed by esophagectomy is being adopted increasingly in patients with locally advanced disease. In patients who are not surgical candidates, definitive chemoradiation is the preferred treatment approach. All patients with newly diagnosed esophageal cancer should be evaluated in the multidisciplinary setting by a surgeon, radiation oncologist, and medical oncologist owing to the importance of each specialty in the management of these patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Approximation of discrete-time LQG compensators for distributed systems with boundary input and unbounded measurement

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.

  3. Design of Distributed Controllers Seeking Optimal Power Flow Solutions Under Communication Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.

  4. Design of Distributed Controllers Seeking Optimal Power Flow Solutions under Communication Constraints: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.

  5. Masking Strategies for Image Manifolds.

    PubMed

    Dadkhahi, Hamid; Duarte, Marco F

    2016-07-07

    We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
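One crude surrogate for such data-dependent mask selection is to rank pixels by their contribution to the pairwise squared distances and keep the top m (purely illustrative; the paper's masks are chosen via a binary integer program or its greedy approximation, and also account for local manifold structure):

```python
import numpy as np

def top_contribution_mask(X, m):
    """X: (n_images, n_pixels) array. Return the indices of the m pixels
    that contribute most to the sum of all pairwise squared distances,
    so that masked distances track the full-image ones."""
    diffs = X[:, None, :] - X[None, :, :]      # (n, n, p) pixelwise gaps
    contrib = (diffs ** 2).sum(axis=(0, 1))    # (p,) per-pixel energy
    return np.sort(np.argsort(-contrib)[:m])   # keep the m largest
```

This keeps only global distance structure; it is meant to convey why a small, well-chosen pixel subset can already carry most of the geometry.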

  6. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE PAGES

    Yaw, Sean; Mumey, Brendan

    2017-10-28

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, as well as an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.
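A simple placement heuristic for this setting can be sketched as follows (hypothetical sketch of the problem's flavor, not the paper's fixed-parameter or approximation algorithm; the job tuple layout is an assumption):

```python
def schedule_peak(jobs, horizon):
    """Greedy heuristic for non-preemptible peak demand minimization.
    Each job is (power, duration, release, deadline); once started it
    runs to completion. Power-hungry jobs are placed first, each at the
    start time within its window that minimizes the local peak."""
    load = [0.0] * horizon
    starts = {}
    for j, (p, d, r, dl) in sorted(enumerate(jobs),
                                   key=lambda it: -it[1][0]):
        best_t, best_peak = None, float("inf")
        for t in range(r, dl - d + 1):      # latest start is dl - d
            peak = max(load[t:t + d]) + p   # peak if started at t
            if peak < best_peak:
                best_t, best_peak = t, peak
        for t in range(best_t, best_t + d): # commit: job is non-preemptible
            load[t] += p
        starts[j] = best_t
    return starts, max(load)
```

With two identical flexible jobs the heuristic staggers them instead of stacking them, which is exactly the peak-flattening effect demand-side scheduling exploits.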

  7. Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.

    PubMed

    Guo, Tianjiao; Englehardt, James D; Fallon, Howard J

    While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.

  8. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaw, Sean; Mumey, Brendan

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, as well as an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.

  9. Design of high-strength refractory complex solid-solution alloys

    DOE PAGES

    Singh, Prashant; Sharma, Aayush; Smirnov, A. V.; ...

    2018-03-28

    Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for high-temperature strength and corrosion resistance. Yet complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over the five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations, validated against our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K over commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.

  10. Design of high-strength refractory complex solid-solution alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Prashant; Sharma, Aayush; Smirnov, A. V.

    Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for high-temperature strength and corrosion resistance. Yet complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over the five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations, validated against our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K over commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.

  11. Cohesive energy and structural parameters of binary oxides of groups IIA and IIIB from diffusion quantum Monte Carlo

    DOE PAGES

    Santana, Juan A.; Krogel, Jaron T.; Kent, Paul R. C.; ...

    2016-05-03

    We have applied the diffusion quantum Monte Carlo (DMC) method to calculate the cohesive energies and structural parameters of the binary oxides CaO, SrO, BaO, Sc2O3, Y2O3 and La2O3. The aim of our calculations is to systematically quantify the accuracy of the DMC method for this type of metal oxide. The DMC results were compared with local and semi-local density functional theory (DFT) approximations as well as with experimental measurements. The DMC method yields cohesive energies for these oxides with a mean absolute deviation from experimental measurements of 0.18(2) eV, while with local and semi-local DFT approximations the deviation is 3.06 and 0.94 eV, respectively. For lattice constants, the mean absolute deviations in DMC and in local and semi-local DFT approximations are 0.017(1), 0.07 and 0.05 Å, respectively. In conclusion, DMC is a highly accurate method, outperforming the local and semi-local DFT approximations in describing the cohesive energies and structural parameters of these binary oxides.

  12. Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2016-12-01

    A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.

  13. Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    PubMed Central

    Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610

  14. Feedforward inhibition and synaptic scaling--two sides of the same coin?

    PubMed

    Keck, Christian; Savin, Cristina; Lücke, Jörg

    2012-01-01

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing.

  15. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes

    PubMed Central

    2017-01-01

    Optimizing the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success, as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding an NVIDIA K80 to a dual socket workstation. PMID:28582389

  16. Evaluation of the Intel Xeon Phi 7120 and NVIDIA K80 as accelerators for two-dimensional panel codes.

    PubMed

    Einkemmer, Lukas

    2017-01-01

    Optimizing the geometry of airfoils for a specific application is an important engineering problem. In this context genetic algorithms have enjoyed some success, as they are able to explore the search space without getting stuck in local optima. However, these algorithms require the computation of aerodynamic properties for a significant number of airfoil geometries. Consequently, for low-speed aerodynamics, panel methods are most often used as the inner solver. In this paper we evaluate the performance of such an optimization algorithm on modern accelerators (more specifically, the Intel Xeon Phi 7120 and the NVIDIA K80). For that purpose, we have implemented an optimized version of the algorithm on the CPU and Xeon Phi (based on OpenMP, vectorization, and the Intel MKL library) and on the GPU (based on CUDA and the MAGMA library). We present timing results for all codes and discuss the similarities and differences between the three implementations. Overall, we observe a speedup of approximately 2.5 for adding an Intel Xeon Phi 7120 to a dual socket workstation and a speedup between 3.4 and 3.8 for adding an NVIDIA K80 to a dual socket workstation.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. In particular, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  18. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies in large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorting genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model, based on a Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, reducing the huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired surrogate fidelity level, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to large-scale coastal aquifer management in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and from the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains the quality of Pareto-optimal solutions obtained by NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
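    The adaptive-retraining idea can be illustrated in miniature. The sketch below is a toy, not the paper's KELM/NSGA-II pipeline: the one-dimensional objective, the inverse-distance-weighted surrogate, and all names are illustrative stand-ins. What it shows is the loop structure: optimize the cheap surrogate, evaluate the expensive simulator only at the candidate the surrogate proposes, and fold that true evaluation back into the training archive so surrogate error cannot silently accumulate.

```python
import random

# Toy adaptive-surrogate loop (illustrative only; not the paper's KELM +
# NSGA-II pipeline). A cheap inverse-distance-weighted surrogate stands in
# for an expensive simulator and is retrained with a true evaluation at each
# candidate optimum it proposes, so its error cannot silently accumulate.
def expensive_sim(x):                 # stand-in for the costly simulator
    return (x - 0.3) ** 2

def surrogate(data, x, p=2.0):
    # inverse-distance-weighted interpolation of archived true evaluations
    num = den = 0.0
    for xi, yi in data:
        d = abs(x - xi)
        if d < 1e-12:
            return yi
        w = 1.0 / d ** p
        num += w * yi
        den += w
    return num / den

def adaptive_optimize(rounds=8, samples=200, seed=1):
    rng = random.Random(seed)
    data = [(x, expensive_sim(x)) for x in (0.0, 0.5, 1.0)]  # initial design
    for _ in range(rounds):
        cand = min((rng.random() for _ in range(samples)),
                   key=lambda x: surrogate(data, x))  # optimize surrogate only
        data.append((cand, expensive_sim(cand)))      # retrain with the truth
    return min(data, key=lambda d: d[1])[0]           # best truly-evaluated x
```

The one-shot alternative would train the surrogate once on the initial design and never correct it, which is exactly where forecasting error accumulates.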

  19. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors, as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
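    The contrast between a deterministic gradient optimizer settling into the nearest minimum and a stochastic optimizer escaping it can be seen on a one-dimensional toy objective. This is purely illustrative, not the clinical objective studied in the record above:

```python
import math, random

# One-dimensional toy objective with two minima: a shallow one near
# x ~ +1.35 and a deeper one near x ~ -1.47 (purely illustrative).
def objective(x):
    return 0.5 * x**4 - 2.0 * x**2 + 0.5 * x

def gradient_descent(x, lr=0.01, steps=2000):
    # Deterministic descent: converges to the minimum of whichever
    # basin the starting point lies in.
    for _ in range(steps):
        g = 2.0 * x**3 - 4.0 * x + 0.5   # derivative of objective
        x -= lr * g
    return x

def simulated_annealing(x, t0=2.0, steps=5000, sigma=0.5, seed=0):
    # Stochastic search: worse moves are accepted with a probability
    # that decays as the temperature cools, allowing basin hops.
    rng = random.Random(seed)
    fx = objective(x)
    x_best, f_best = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-3      # linear cooling schedule
        y = x + rng.gauss(0.0, sigma)
        fy = objective(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < f_best:
                x_best, f_best = x, fx
    return x_best
```

Started from x = 1.0, the gradient method converges to the shallower minimum near x ≈ 1.35, while simulated annealing typically records a best point in the deeper basin near x ≈ −1.47; the gap between the two final objective values is the analogue of the local minima error discussed above.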

  20. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors, as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.

  1. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jingjing; Zhang, Bin; Reyes, Juvenal

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulation, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3-12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, the simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm.
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging in that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for a single source and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models.

  2. Influence of the Numerical Scheme on the Solution Quality of the SWE for Tsunami Numerical Codes: The Tohoku-Oki, 2011Example.

    NASA Astrophysics Data System (ADS)

    Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.

    2015-12-01

    Numerical tools are very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend strongly on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention in the effort to provide robust and accurate solutions. In this study we compare the efficiency of two finite volume numerical codes with second-order discretization, implemented with different methods to solve the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) and the MOOD (Multi-dimensional Optimal Order Detection) methods, the latter of which optimizes the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria, where the limiting procedure is performed before updating the solution to the next time-step, leading to unnecessary accuracy reduction. In contrast, the new MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities. Indeed, a candidate solution is computed and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, providing a better approximation with sharper shocks and less numerical diffusion. For code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply affect the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
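    The a priori limiting that this record contrasts with MOOD can be sketched with the classical minmod limiter. This is a generic textbook form, not the authors' code:

```python
# Classical minmod slope limiter and a MUSCL-style linear reconstruction
# of left interface states (generic textbook forms, not the authors' code).
def minmod(a, b):
    # Returns 0 at extrema/discontinuities, else the smaller-magnitude slope.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_left_states(u):
    # Limited linear reconstruction at interfaces i+1/2 for interior cells.
    states = []
    for i in range(1, len(u) - 1):
        slope = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        states.append(u[i] + 0.5 * slope)
    return states
```

On the smooth ramp `[0, 1, 2, 3]` the full slope survives, but at the peak of `[0, 1, 2, 1, 0]` the slope is clipped to zero even though the data are smooth: the limiter acts before the update, whether or not an oscillation would actually have occurred, which is precisely the accuracy cost that MOOD's a posteriori detection avoids.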

  3. Acceleration techniques in the univariate Lipschitz global optimization

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, as well as traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
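    A minimal sketch of the geometric approach mentioned above, in the Piyavskii-Shubert style, assuming a single known overestimate L of the Lipschitz constant. The paper's local tuning techniques instead adapt per-subinterval estimates of L, which this toy deliberately omits:

```python
# Minimal Piyavskii-Shubert-style univariate Lipschitz minimization,
# assuming a single known overestimate L of the Lipschitz constant.
def lipschitz_minimize(f, a, b, L, iters=200):
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(iters):
        # Over [x1, x2] the piecewise-linear minorant attains its minimum
        # where the two downward cones from the endpoints intersect.
        best = None
        for i in range(len(xs) - 1):
            x1, x2, f1, f2 = xs[i], xs[i + 1], fs[i], fs[i + 1]
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)  # intersection point
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)    # lower bound there
            if best is None or lb < best[0]:
                best = (lb, i, xm)
        _, i, xm = best
        xs.insert(i + 1, xm)        # refine the most promising subinterval
        fs.insert(i + 1, f(xm))
    k = min(range(len(xs)), key=lambda j: fs[j])
    return xs[k], fs[k]
```

For example, minimizing (x − 2)² on [0, 5] with L = 10 concentrates trial points around x = 2; an overly large L widens the cones and slows this concentration, which is the inefficiency that local tuning of L addresses.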

  4. Optimal resource states for local state discrimination

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael

    2018-02-01

    We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.

  5. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2)-O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
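    The core of adjoint-based weight biasing can be stated in a few lines: target weights are taken inversely proportional to an approximate adjoint flux, normalized so that source particles are born inside their weight window. The sketch below is a conceptual illustration with made-up numbers and a simplified normalization, not the MAVRIC/ADVANTG implementation:

```python
# Conceptual CADIS-style weight windows (made-up numbers, simplified
# normalization): target weights are inversely proportional to an
# approximate adjoint flux from a fast deterministic calculation.
def cadis_weights(adjoint_flux, source_cell):
    ref = adjoint_flux[source_cell]       # normalize at the source cell
    return [ref / phi for phi in adjoint_flux]
```

Cells with higher adjoint flux (more "important" to the tally) get lower target weights, so particles drifting toward the detector are split into many low-weight copies while those heading away are rouletted, which is what drives the reported speed-ups.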

  6. An approximation function for frequency constrained structural optimization

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.

    1989-01-01

    The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations for frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency, and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
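    The idea behind a Rayleigh-quotient frequency approximation can be illustrated on a tiny spring-mass system. The matrices and mode shape below are illustrative, not the paper's RQA formulation: evaluating the Rayleigh quotient with a frozen mode shape from a reference design gives a high-quality estimate of ω² as the design (and hence the stiffness and mass matrices) changes.

```python
# Illustrative Rayleigh quotient for the generalized eigenproblem
# K phi = lambda M phi; with a frozen mode shape it estimates omega^2
# for the fundamental mode. The 2-DOF matrices used in the test are
# illustrative, not from the paper.
def rayleigh_quotient(K, M, phi):
    n = len(phi)
    num = sum(phi[i] * sum(K[i][j] * phi[j] for j in range(n)) for i in range(n))
    den = sum(phi[i] * sum(M[i][j] * phi[j] for j in range(n)) for i in range(n))
    return num / den   # approximates omega^2
```

For the 2-DOF chain K = [[2, −1], [−1, 1]], M = I, the exact fundamental eigenvalue is (3 − √5)/2 ≈ 0.38197, and the quotient with the approximate mode shape (1, 1.618) reproduces it to about 1e−5. Because the quotient is stationary at an eigenvector, mode-shape errors cause only second-order errors in ω², which is why such an approximation can track the nonlinear frequency constraint well across design changes.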

  7. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular approach to hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram, and its efficacy depends on the embedding parameters, i.e., the embedding dimension, time lag, and number of nearest neighbors. Optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to local optimization and thus limits the prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization can help the local model provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields, such as the prediction of hydro-climatic time series, error correction, etc.
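    A minimal sketch of the local (nearest-neighbor) model itself, with the embedding parameters fixed by hand; the paper's contribution is precisely to optimize (m, τ, k) jointly with SA or GA, which this toy omits. All names are illustrative:

```python
# Minimal zeroth-order local model on a time-delay embedding.
# Parameters m (embedding dimension), tau (time lag), and k (number of
# nearest neighbors) are fixed by hand here; the paper optimizes them
# jointly (e.g., with simulated annealing). Names are illustrative.
def embed(series, m, tau):
    n = len(series) - (m - 1) * tau
    return [tuple(series[i + j * tau] for j in range(m)) for i in range(n)]

def local_model_predict(series, m=3, tau=1, k=4):
    vectors = embed(series, m, tau)
    query = vectors[-1]                          # current reconstructed state
    candidates = []
    for i, v in enumerate(vectors[:-1]):
        succ_idx = i + (m - 1) * tau + 1         # value that followed state v
        d = sum((a - b) ** 2 for a, b in zip(v, query))
        candidates.append((d, series[succ_idx]))
    candidates.sort()                            # nearest embedded states first
    nearest = candidates[:k]
    return sum(s for _, s in nearest) / len(nearest)
```

The prediction averages the successors of the k embedded states most similar to the current one; poor choices of m, τ, or k distort the reconstructed phase space and degrade this average, which is why their joint global optimization matters.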

  8. The use of kernel local Fisher discriminant analysis for the channelization of the Hotelling model observer

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.

    2015-03-01

    It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.

  9. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, obtaining satisfying segmentation results for the whole scene with a single segmentation parameter can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization, compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusions between buildings and their bare-soil neighbors.

  10. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    NASA Technical Reports Server (NTRS)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  11. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
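
    The linearly varying scaling factor can be sketched as follows; the crude and refined models are illustrative stand-ins, not the beam example from the paper.

```python
# Hedged sketch of the global-local approximation (GLA) idea: scale a crude
# model by a linearly varying factor fitted at x0, instead of a constant one.

def f_crude(x):   return x ** 2                   # cheap analysis model
def f_refined(x): return x ** 2 * (1 + 0.3 * x)   # expensive refined model

x0 = 1.0
h = 1e-6                                          # central finite differences
d_crude   = (f_crude(x0 + h) - f_crude(x0 - h)) / (2 * h)
d_refined = (f_refined(x0 + h) - f_refined(x0 - h)) / (2 * h)

beta0 = f_refined(x0) / f_crude(x0)               # conventional constant scaling
beta1 = (d_refined - beta0 * d_crude) / f_crude(x0)   # linear correction term

def approx_constant(x): return beta0 * f_crude(x)
def approx_gla(x):      return (beta0 + beta1 * (x - x0)) * f_crude(x)

x = 1.5
err_const = abs(approx_constant(x) - f_refined(x))
err_gla   = abs(approx_gla(x) - f_refined(x))
assert err_gla < err_const
```

Because the true ratio f_refined/f_crude happens to be linear in this toy, the GLA approximation is essentially exact away from x0, while the constant scaling is not; in general GLA simply extends the range over which the scaled crude model stays accurate.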

  12. Localization-delocalization transition in a system of quantum kicked rotors.

    PubMed

    Creffield, C E; Hur, G; Monteiro, T S

    2006-01-20

    The quantum dynamics of atoms subjected to pairs of closely spaced delta kicks from optical potentials are shown to be quite different from the well-known paradigm of quantum chaos, the single delta-kick system. We find the unitary matrix has a new oscillating band structure corresponding to a cellular structure of phase space and observe a spectral signature of a localization-delocalization transition from one cell to several. We find that the eigenstates have localization lengths which scale with a fractional power, L ~ ħ^(-0.75), and obtain a regime of near-linear spectral variances which approximate the "critical statistics" relation Σ²(L) ≈ χ(L) ≈ ½(1−ν)L, where ν ≈ 0.75 is related to the fractal classical phase-space structure. The origin of the ν ≈ 0.75 exponent is analyzed.

  13. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.

  14. Online adaptive optimal control for continuous-time nonlinear systems with completely unknown dynamics

    NASA Astrophysics Data System (ADS)

    Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu

    2016-01-01

    An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.

  15. Approximate Locality for Quantum Systems on Graphs

    NASA Astrophysics Data System (ADS)

    Osborne, Tobias J.

    2008-10-01

    In this Letter we make progress on a long-standing open problem of Aaronson and Ambainis [Theory Comput. 1, 47 (2005)]: we show that if U is a sparse unitary operator with a gap Δ in its spectrum, then there exists an approximate logarithm H of U which is also sparse. The sparsity pattern of H gets more dense as 1/Δ increases. This result can be interpreted as a way to convert between local continuous-time and local discrete-time quantum processes. As an example we show that the discrete-time coined quantum walk can be realized stroboscopically from an approximately local continuous-time quantum walk.

  16. The potential for lithoautotrophic life on Mars: application to shallow interfacial water environments.

    PubMed

    Jepsen, Steven M; Priscu, John C; Grimm, Robert E; Bullock, Mark A

    2007-04-01

    We developed a numerical model to assess the lithoautotrophic habitability of Mars based on metabolic energy, nutrients, water availability, and temperature. Available metabolic energy and nutrient sources were based on a laboratory-produced Mars-analog inorganic chemistry. For this specific reference chemistry, the most efficient lithoautotrophic microorganisms would use Fe2+ as a primary metabolic electron donor and NO3- or gaseous O2 as a terminal electron acceptor. In a closed model system, biomass production was limited by the electron donor Fe2+ and metabolically required P, and typically amounted to approximately 800 pg of dry biomass/ml (approximately 8,500 cells/ml). Continued growth requires propagation of microbes to new fecund environments, delivery of fresh pore fluid, or continued reaction with the host material. Within the shallow cryosphere--where oxygen can be accessed by microbes and microbes can be accessed by exploration--lithoautotrophs can function within as little as three monolayers of interfacial water formed either by adsorption from the atmosphere or in regions of ice stability where temperatures are within some tens of degrees of the ice melting point. For the selected reference host material (shergottite analog) and associated inorganic fluid chemistry, complete local reaction of the host material potentially yields a time-integrated biomass of approximately 0.1 mg of dry biomass/g of host material (approximately 10^9 cells/g). Biomass could also be sustained where solutes can be delivered by advection (cryosuction) or diffusion in interfacial water; however, both of these processes are relatively inefficient. Lithoautotrophs in near-surface thin films of water, therefore, would optimize their metabolism by deriving energy and nutrients locally. Although the selected chemistry and associated model output indicate that lithoautotrophic microbial biomass could accrue within shallow interfacial water on Mars, it is likely that these organisms would spend long periods in maintenance or survival modes, with instantaneous biomass comparable to or less than that observed in extreme environments on Earth.

  17. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  18. Belief Propagation Algorithm for Portfolio Optimization Problems

    PubMed Central

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462

  19. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.

  20. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.

  1. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  2. Strong convergence and convergence rates of approximating solutions for algebraic Riccati equations in Hilbert spaces

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1987-01-01

    The linear quadratic optimal control problem on the infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution π of the associated algebraic Riccati equation (ARE). A Ritz type approximation is used to obtain a sequence π^N of finite dimensional approximations of the solution to the ARE. A sufficient condition under which π^N converges strongly to π is obtained. Under this condition, a formula is derived which can be used to obtain a rate of convergence of π^N to π. The results are demonstrated and applied for the Galerkin approximation for parabolic systems and the averaging approximation for hereditary differential systems.
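
    A minimal scalar sketch of this feedback structure (the abstract's setting is infinite-dimensional; the scalars a, b, q, r below are hypothetical stand-ins):

```python
# For dx/dt = a*x + b*u with cost ∫ (q*x^2 + r*u^2) dt, the optimal control
# is u = -(b/r)*pi*x, where pi solves the scalar algebraic Riccati equation
#     2*a*pi - (b^2/r)*pi^2 + q = 0.
import math

a, b, q, r = 1.0, 1.0, 1.0, 1.0
pi = (r / b**2) * (a + math.sqrt(a**2 + b**2 * q / r))   # positive ARE root

residual = 2 * a * pi - (b**2 / r) * pi**2 + q           # should vanish
a_closed = a - (b**2 / r) * pi   # closed-loop dynamics under u = -(b/r)*pi*x

assert abs(residual) < 1e-12
assert a_closed < 0              # the optimal feedback stabilizes the system
```

The finite-dimensional approximations in the abstract replace π by a sequence π^N obtained from Ritz projections; this scalar case only illustrates the feedback law they converge to.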

  3. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
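
    The contrast between the two approximation types can be sketched in one dimension; this toy (with a hand-rolled linear solver and an invented sample function) is not the aerospike-nozzle study itself:

```python
# Quadratic response surface (least squares) vs. simple kriging with a
# Gaussian correlation function on five deterministic samples.
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (library stand-in)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p] = M[p], M[c]
        for i in range(n):
            if i != c:
                f = M[i][c] / M[c][c]
                for k in range(c, n + 1):
                    M[i][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]     # deterministic "analysis"

# Quadratic response surface: least-squares fit of basis 1, x, x^2.
X = [[1.0, x, x * x] for x in xs]
AtA = [[sum(X[i][a] * X[i][b] for i in range(5)) for b in range(3)]
       for a in range(3)]
Atb = [sum(X[i][a] * ys[i] for i in range(5)) for a in range(3)]
c0, c1, c2 = solve(AtA, Atb)
rs_resid = max(abs(c0 + c1 * x + c2 * x * x - y) for x, y in zip(xs, ys))

# Simple kriging: correlation R_ij = exp(-theta * (x_i - x_j)^2).
theta = 10.0
R = [[math.exp(-theta * (xi - xj) ** 2) for xj in xs] for xi in xs]
w = solve(R, ys)
def krige(x):
    return sum(wi * math.exp(-theta * (x - xi) ** 2) for wi, xi in zip(w, xs))

kr_resid = max(abs(krige(x) - y) for x, y in zip(xs, ys))
assert kr_resid < 1e-8 < rs_resid   # kriging interpolates; the quadratic cannot
```

The kriging predictor passes exactly through every sample (a defining property for deterministic computer experiments), while the second-order polynomial leaves large residuals on a wavy function.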

  4. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    NASA Astrophysics Data System (ADS)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-11-01

    In this paper we investigate the approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ]

  5. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well-known trajectory optimization code VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.

  6. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  7. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
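
    The occupation-angle device can be shown in a few lines; this is a sketch of the parametrization only, not of SOEO itself:

```python
# Writing each occupation number as n_i = 2*sin(theta_i)**2 keeps
# 0 <= n_i <= 2 automatically, so the angles theta_i can be optimized
# without explicit bound constraints.
import math

def occupations(thetas):
    return [2 * math.sin(t) ** 2 for t in thetas]

thetas = [0.3, 1.2, -2.7, 10.0]        # arbitrary unconstrained angles
ns = occupations(thetas)
assert all(0.0 <= n <= 2.0 for n in ns)

# Fully occupied and empty orbitals sit at stationary points of the map
# (dn/dtheta = 2*sin(2*theta) vanishes there), which helps convergence.
assert abs(occupations([math.pi / 2])[0] - 2.0) < 1e-12
assert abs(occupations([0.0])[0]) < 1e-12
```

The total-electron-number restriction described in the abstract would be imposed on top of this parametrization through the Newton-Raphson equations.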

  8. Multivariate Copula Analysis Toolbox (MvCAT): Describing dependence and underlying uncertainty using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir

    2017-06-01

    We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves the description of the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.
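
    The local-minimum pitfall noted above is easy to reproduce on a toy double-well objective; the sketch below contrasts plain gradient descent with a multi-start search and is not MvCAT's MCMC machinery:

```python
# A double-well objective: gradient descent from one start finds only the
# nearer, worse minimum, while a multi-start search recovers the deeper one.

def f(x):  return (x * x - 1) ** 2 + 0.3 * x
def df(x): return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_local = descend(0.5)                     # trapped in the right-hand well
starts = [-2.0, -1.0, 0.0, 1.0, 2.0]
x_best = min((descend(s) for s in starts), key=f)

assert f(x_best) < f(x_local) - 0.3        # multi-start finds the deeper well
```

A Bayesian/MCMC scheme such as the one in the abstract explores the posterior globally and therefore avoids this trap in a similar spirit.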

  9. Optimal prescribed burn frequency to manage foundation California perennial grass species and enhance native flora

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsen, Tina M.; Espeland, Erin K.; Paterson, Lisa E.

    Grasslands can be diverse assemblages of grasses and forbs, but not much is known about how perennial grass species management affects native plant diversity except in a few instances. We studied the use of late-spring prescribed burns over a span of 11 years where the perennial grass Poa secunda was the foundation species, with four additional years of measurements after the final burn. We also evaluated burn effects on P. secunda, the rare native annual forb Amsinckia grandiflora, and local native and exotic species. Annual burning maintained P. secunda number, resulted in significant expansion, the lowest thatch and exotic grass cover, the highest percentage of bare ground, but also the lowest native forb and highest exotic forb cover. Burning approximately every 3 years maintained a lower number of P. secunda plants, allowed for expansion, and resulted in the highest native forb cover with a low exotic grass cover. Burning approximately every 5 years and the control (burned once from a wildfire) resulted in a decline in P. secunda number, the highest exotic grass and thatch cover, and the lowest percentage of bare ground. P. secunda numbers were maintained up to 4 years after the final burn. While local native forbs benefited from burning approximately every 3 years, planted A. grandiflora performed best in the control treatment. A. grandiflora did not occur naturally at the site; therefore, no seed bank was present to provide across-year protection from the effects of the burns. Thus, perennial grass species management must also consider other native species life history and phenology to enhance native flora diversity.

  11. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.

  13. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
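
    The bound-based pruning idea can be sketched as a best-first grid search with an admissible lower bound on the remaining cost; the grid, unit step costs, and Manhattan bound below are illustrative assumptions, not the manipulator dynamics:

```python
# Best-first search over a grid that uses an admissible lower bound on
# remaining travel cost to discard candidate paths early, in the spirit of
# the branch and bound search described above.
import heapq

def shortest_path_cost(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def bound(p):                        # admissible lower bound to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(bound(start), 0, start)]
    best = {start: 0}
    while frontier:
        est, cost, p = heapq.heappop(frontier)
        if p == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dr, p[1] + dc)
            if 0 <= q[0] < rows and 0 <= q[1] < cols and grid[q[0]][q[1]] == 0:
                c = cost + 1
                if c < best.get(q, float("inf")):
                    best[q] = c
                    heapq.heappush(frontier, (c + bound(q), c, q))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],                       # obstacle cells force a detour
        [0, 0, 0]]
assert shortest_path_cost(grid, (0, 0), (2, 0)) == 6
```

As in the paper, candidates whose lower-bound estimate already exceeds the best known cost are never expanded, which is what makes the grid-level search affordable before local refinement.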

  14. Formal expressions and corresponding expansions for the exact Kohn-Sham exchange potential

    NASA Astrophysics Data System (ADS)

    Bulat, Felipe A.; Levy, Mel

    2009-11-01

    Formal expressions and their corresponding expansions in terms of Kohn-Sham (KS) orbitals are deduced for the exchange potential v_x(r). After an alternative derivation of the basic optimized effective potential integrodifferential equations is given through a Hartree-Fock adiabatic connection perturbation theory, we present an exact infinite expansion for v_x(r) that is particularly simple in structure. It contains the very same occupied-virtual quantities that appear in the well-known optimized effective potential integral equation, but in this new expression v_x(r) is isolated on one side of the equation. An orbital-energy modified Slater potential is its leading term, which gives encouraging numerical results. Along different lines, while the earlier Krieger-Li-Iafrate approximation truncates completely the necessary first-order perturbation orbitals, we observe that the improved localized Hartree-Fock (LHF) potential, or common energy denominator potential (CEDA), or effective local potential (ELP), incorporates the part of each first-order orbital that consists of the occupied KS orbitals. With this in mind, the exact correction to the LHF, CEDA, or ELP potential (they are all equivalent) is deduced and displayed in terms of the virtual portions of the first-order orbitals. We close by observing that the newly derived exact formal expressions and corresponding expansions apply as well for obtaining the correlation potential from an orbital-dependent correlation energy functional.

  15. Test particle propagation in magnetostatic turbulence. 2: The local approximation method

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.

    1976-01-01

    An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.

  16. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  17. Recent developments in LIBXC - A comprehensive library of functionals for density functional theory

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Steigemann, Conrad; Oliveira, Micael J. T.; Marques, Miguel A. L.

    2018-01-01

    LIBXC is a library of exchange-correlation functionals for density-functional theory. We are concerned with semi-local functionals (or the semi-local part of hybrid functionals), namely local-density approximations, generalized-gradient approximations, and meta-generalized-gradient approximations. Currently we include around 400 functionals for the exchange, correlation, and the kinetic energy, spanning more than 50 years of research. Moreover, LIBXC is by now used by more than 20 codes, not only from the atomic, molecular, and solid-state physics communities, but also from quantum chemistry.

  18. Effect of design selection on response surface performance

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1993-01-01

    The mathematical formulation of the engineering optimization problem is given. Evaluation of the objective function and constraint equations can be very expensive in a computational sense. Thus, it is desirable to use as few evaluations as possible in obtaining its solution. In solving the equation, one approach is to develop approximations to the objective function and/or constraint equations and then to solve the equation using the approximations in place of the original functions. These approximations are referred to as response surfaces. The desirability of using response surfaces depends upon the number of functional evaluations required to build the response surfaces compared to the number required in the direct solution of the equation without approximations. The present study is concerned with evaluating the performance of response surfaces so that a decision can be made as to their effectiveness in optimization applications. In particular, this study focuses on how the quality of approximations is affected by design selection. Polynomial approximations and neural net approximations are considered.

  19. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
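    The cuckoo-search loop this record describes (Lévy-flight proposals plus abandonment of a fraction pa of the worst nests, with only the two parameters the authors mention) can be sketched as follows. This is a generic minimal implementation on a simple test function, not the authors' curve-fitting code; the function, bounds, and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(beta=1.5, size=2):
    # Mantegna's algorithm for heavy-tailed Levy-flight step lengths
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, pa=0.25, iters=300, lb=-5.0, ub=5.0):
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # propose new solutions via Levy flights biased toward the current best
        for i in range(n_nests):
            step = 0.01 * levy_step(size=dim) * (nests[i] - best)
            cand = np.clip(nests[i] + step, lb, ub)
            j = rng.integers(n_nests)     # compare against a random nest
            fc = f(cand)
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # abandon a fraction pa of the worst nests and re-seed them randomly
        n_drop = int(pa * n_nests)
        worst = np.argsort(fit)[-n_drop:]
        nests[worst] = rng.uniform(lb, ub, (n_drop, dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))   # stand-in for the Bayesian energy functional
x_best, f_best = cuckoo_search(sphere)
```

    In the paper's setting the objective would be the weighted Bayesian energy functional of the curve coefficients rather than this sphere function; only `n_nests` and `pa` need tuning, which is the simplicity the abstract highlights.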

  20. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  1. Estimation of optimal educational cost per medical student.

    PubMed

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003-2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish the educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%) and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won each semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter or one-half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost for training to be a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.

  2. An evaluation of methods for estimating the number of local optima in combinatorial optimization problems.

    PubMed

    Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A

    2013-01-01

    The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed not only come from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation on synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
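    One simple estimator of the kind drawn from the statistical literature the abstract mentions is the Chao1 species-richness estimator applied to the local optima reached from random restarts. The sketch below builds a toy random landscape over bit strings (the landscape, neighborhood, and sample sizes are all illustrative, not taken from the paper):

```python
import random
from collections import Counter

random.seed(1)
N_BITS = 10
# toy instance: a random fitness landscape over all 2^10 bit strings
fitness = {i: random.random() for i in range(2 ** N_BITS)}

def hill_climb(x):
    # steepest-ascent local search under the Hamming-1 neighborhood
    while True:
        nbrs = [x ^ (1 << b) for b in range(N_BITS)]
        best = max(nbrs, key=fitness.get)
        if fitness[best] <= fitness[x]:
            return x            # x is a local optimum
        x = best

# sample local optima by hill-climbing from uniform random starts
hits = Counter(hill_climb(random.randrange(2 ** N_BITS)) for _ in range(400))

# Chao1 estimator of the total number of local optima:
#   N_hat = S_obs + f1^2 / (2 * f2),
# where f1/f2 count optima seen exactly once/twice in the sample
s_obs = len(hits)
f1 = sum(1 for c in hits.values() if c == 1)
f2 = sum(1 for c in hits.values() if c == 2)
n_hat = s_obs + (f1 * f1) / (2 * f2) if f2 > 0 else s_obs
```

    Because restarts land in basins with probability proportional to basin size, small-basin optima are undersampled; Chao1 corrects upward using the counts of rarely seen optima, which is exactly the bias issue such estimators are designed to address.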

  3. Intelligent control and cooperation for mobile robots

    NASA Astrophysics Data System (ADS)

    Stingu, Petru Emanuel

    The topic discussed in this work addresses the current research being conducted at the Automation & Robotics Research Institute in the areas of UAV quadrotor control and heterogeneous multi-vehicle cooperation. Autonomy can be successfully achieved by a robot under the following conditions: the robot has to be able to acquire knowledge about the environment and itself, and it also has to be able to reason under uncertainty. The control system must react quickly to immediate challenges, but also has to slowly adapt and improve based on accumulated knowledge. The major contribution of this work is the transfer of the ADP algorithms from the purely theoretical environment to the complex real-world robotic platforms that work in real-time and in uncontrolled environments. Many solutions are adopted from those present in nature because they have been proven to be close to optimal in very different settings. For the control of a single platform, reinforcement learning algorithms are used to design suboptimal controllers for a class of complex systems that can be conceptually split into local loops with simpler dynamics and relatively weak coupling to the rest of the system. Optimality is enforced by having a global critic but the curse of dimensionality is avoided by using local actors and intelligent pre-processing of the information used for learning the optimal controllers. The system model is used for constructing the structure of the control system, but on top of that the adaptive neural networks that form the actors use the knowledge acquired during normal operation to get closer to optimal control. In real-world experiments, efficient learning is a strong requirement for success. This is accomplished by using an approximation of the system model to focus the learning for equivalent configurations of the state space. Due to the availability of only local data for training, neural networks with local activation functions are implemented.
For the control of a formation of robots subjected to dynamic communication constraints, game theory is used in addition to reinforcement learning. The nodes maintain an extra set of state variables about all the other nodes that they can communicate to. The more important are trust and predictability. They are a way to incorporate knowledge acquired in the past into the control decisions taken by each node. The trust variable provides a simple mechanism for the implementation of reinforcement learning. For robot formations, potential field based control algorithms are used to generate the control commands. The formation structure changes due to the environment and due to the decisions of the nodes. It is a problem of building a graph and coalitions by having distributed decisions but still reaching an optimal behavior globally.

  4. Subcritical transition scenarios via linear and nonlinear localized optimal perturbations in plane Poiseuille flow

    NASA Astrophysics Data System (ADS)

    Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro

    2016-12-01

    Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p ≫ 1) of the perturbation energy density in a linear framework, or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The former shows localization only in the wall-normal direction, whereas the latter appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most of the cases nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability.
On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases of streak saturation, providing a contemporary growth of all of the velocity components due to strong nonlinear coupling.

  5. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
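    The core idea of a rational (Padé-type) surrogate can be illustrated in one dimension: fit r(x) = (a0 + a1 x)/(1 + b1 x) to a few samples by linearizing the interpolation conditions into a linear system. This is only a 1-D illustration of the rational-model idea, not the multivariate construction or trust-region update of the article; the target function and sample points are arbitrary choices.

```python
import numpy as np

# [1/1] Pade-type rational surrogate r(x) = (a0 + a1*x) / (1 + b1*x),
# fitted to 3 samples of f. The interpolation condition
#   r(x_i) * (1 + b1*x_i) = a0 + a1*x_i
# rearranges to the linear equation  a0 + a1*x_i - f_i*x_i*b1 = f_i.
f = np.exp                      # illustrative expensive function
xs = np.array([0.0, 0.5, 1.0])  # sample points (e.g., inside a trust region)
fs = f(xs)

M = np.column_stack([np.ones(3), xs, -fs * xs])
a0, a1, b1 = np.linalg.solve(M, fs)

def pade(x):
    return (a0 + a1 * x) / (1 + b1 * x)
```

    The fitted model interpolates the three samples exactly and, unlike a linear model, captures curvature through the denominator, which is the feature the article exploits to get quadratic-like accuracy from fewer data points.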

  6. Partial discharge localization in power transformers based on the sequential quadratic programming-genetic algorithm adopting acoustic emission techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hua-Long; Liu, Hua-Dong

    2014-10-01

    Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD using acoustic emission (AE) techniques, a kind of non-destructive testing, have received increasing attention owing to their powerful locating capability and high precision. The localization algorithm is the key factor that determines the localization accuracy in AE localization of PD. Many localization algorithms, both intelligent and non-intelligent, exist for PD source localization with AE techniques. However, the existing algorithms suffer from defects such as premature convergence, poor local optimization ability, and unsuitability for field applications. To overcome the poor local optimization ability and the premature convergence of the fundamental genetic algorithm (GA), an improved GA is proposed, namely the sequential quadratic programming-genetic algorithm (SQP-GA). In this hybrid optimization algorithm, the sequential quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so the local searching ability of the fundamental GA is improved effectively and the premature convergence phenomenon is overcome. Experimental results of numerical simulations on benchmark functions show that the hybrid SQP-GA outperforms the fundamental GA in convergence speed and optimization precision. The SQP-GA is then applied to the ultrasonic localization of PD in transformers, and an ultrasonic localization method for PD in transformers based on the SQP-GA is proposed.
Localization results based on the SQP-GA are compared with those of the GA and several other intelligent and non-intelligent algorithms. Results from both simulated examples and field experiments demonstrate that the SQP-GA-based localization method effectively prevents the results from getting trapped in local optima, is highly feasible and well suited for field applications, and delivers enhanced localization precision with satisfactory effectiveness.
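    The general hybrid pattern this record describes (a genetic algorithm whose premature convergence is countered by embedding a local-search operator) can be sketched as follows. A derivative-free pattern search stands in for the SQP step, and the objective, operators, and parameters are illustrative rather than the authors':

```python
import random

random.seed(0)

def f(x):
    # illustrative multimodal 2-D objective (Himmelblau-style), standing in
    # for the acoustic time-of-arrival misfit of the PD localization problem
    a, b = x
    return (a * a + b - 11) ** 2 + (a + b * b - 7) ** 2

def local_refine(x, step=0.5, rounds=30):
    # simple coordinate pattern search, a stand-in for the SQP local operator
    x = list(x)
    for _ in range(rounds):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                cand = x[:]
                cand[i] += d
                if f(cand) < f(x):
                    x, improved = cand, True
        if not improved:
            step *= 0.5            # shrink when no direction improves
    return x

def hybrid_ga(pop_size=20, gens=40, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            p, q = random.sample(elite, 2)
            # blend crossover plus Gaussian mutation
            children.append([(a + b) / 2 + random.gauss(0, 0.3)
                             for a, b in zip(p, q)])
        pop = elite + children
    # refine the GA's best individual with the local search
    return local_refine(min(pop, key=f))

best = hybrid_ga()
```

    The GA supplies global exploration while the local operator sharpens the best candidate, which is the division of labor the abstract attributes to the SQP-GA.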

  7. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as the particles' inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
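    For reference, the classical global-best PSO baseline that the abstract contrasts with (inertia term plus cognitive and social pulls) looks like the following minimal sketch; this is the generic algorithm on a multimodal test function, not the authors' CPSO, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lb=-5.0, ub=5.0):
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                               # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# multimodal test function standing in for a traveltime misfit
rastrigin = lambda z: float(10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z)))
g, fg = pso(rastrigin)
```

    As the abstract notes, once `w * v` dims late in the run the swarm stops escaping local basins; the authors' competitive variant perturbs this baseline to maintain diversity, and repeated independent runs can be pooled to sample the model space.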

  8. Calculations of the excitation energies of all-trans and 11,12s-dicis retinals using localized molecular orbitals obtained by the elongation method

    NASA Astrophysics Data System (ADS)

    Kurihara, Youji; Aoki, Yuriko; Imamura, Akira

    1997-09-01

    In the present article, the excitation energies of the all-trans and the 11,12s-dicis retinals were calculated by using the elongation method. The geometries of these molecules were optimized with the 4-31G basis set by using the GAUSSIAN 92 program. The wave functions for the calculation of the excitation energies were obtained with CNDO/S approximation by the elongation method, which enables us to analyze electronic structures of aperiodic polymers in terms of the exciton-type local excitation and the charge transfer-type excitation. The excitation energies were calculated by using the single excitation configuration interaction (SECI) on the basis of localized molecular orbitals (LMOs). The LMOs were obtained in the process of the elongation method. The configuration interaction (CI) matrices were diagonalized by Davidson's method. The calculated results were in good agreement with the experimental data for absorption spectra. In order to consider the isomerization path from 11,12s-dicis to all-trans retinals, the barriers to the rotations about C11-C12 double and C12-C13 single bonds were evaluated.

  9. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    NASA Astrophysics Data System (ADS)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized (``sparse'') solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (``compressed modes''). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).

  10. Nonlinearity without superluminality

    NASA Astrophysics Data System (ADS)

    Kent, Adrian

    2005-07-01

    Quantum theory is compatible with special relativity. In particular, though measurements on entangled systems are correlated in a way that cannot be reproduced by local hidden variables, they cannot be used for superluminal signaling. As Czachor, Gisin, and Polchinski pointed out, this is not generally true of general nonlinear modifications of the Schrödinger equation. Excluding superluminal signaling has thus been taken to rule out most nonlinear versions of quantum theory. The no-superluminal-signaling constraint has also been used for alternative derivations of the optimal fidelities attainable for imperfect quantum cloning and other operations. These results apply to theories satisfying the rule that their predictions for widely separated and slowly moving entangled systems can be approximated by nonrelativistic equations of motion with respect to a preferred time coordinate. This paper describes a natural way in which this rule might fail to hold. In particular, it is shown that quantum readout devices which display the values of localized pure states need not allow superluminal signaling, provided that the devices display the values of the states of entangled subsystems as defined in a nonstandard, although natural, way. It follows that any locally defined nonlinear evolution of pure states can be made consistent with Minkowski causality.

  11. OPTRAN- OPTIMAL LOW THRUST ORBIT TRANSFERS

    NASA Technical Reports Server (NTRS)

    Breakwell, J. V.

    1994-01-01

    OPTRAN is a collection of programs that solve the problem of optimal low thrust orbit transfers between non-coplanar circular orbits for spacecraft with chemical propulsion systems. The programs are set up to find Hohmann-type solutions, with burns near the perigee and apogee of the transfer orbit. They will solve both fairly long burn-arc transfers and "divided-burn" transfers. Program modeling includes a spherical earth gravity model and propulsion system models for either constant thrust or constant acceleration. The solutions obtained are optimal with respect to fuel use: i.e., final mass of the spacecraft is maximized with respect to the controls. The controls are the direction of thrust and the thrust on/off times. Two basic types of programs are provided in OPTRAN. The first type is for "exact solutions" which result in complete, exact time-histories. The exact spacecraft position, velocity, and optimal thrust direction are given throughout the maneuver, as are the optimal thrust switch points, the transfer time, and the fuel costs. Exact solution programs are provided in two versions for non-coplanar transfers and in a fast version for coplanar transfers. The second basic type is for "approximate solutions" which result in approximate information on the transfer time and fuel costs. The approximate solution is used to estimate initial conditions for the exact solution. It can be used in divided-burn transfers to find the best number of burns with respect to time. The approximate solution is useful by itself in relatively efficient, short burn-arc transfers. These programs are written in FORTRAN 77 for batch execution and have been implemented on a DEC VAX series computer, with the largest program having a central memory requirement of approximately 54K of 8 bit bytes. The OPTRAN programs were developed in 1983.

  12. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
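    The structure of the lower bound can be seen in a scalar sketch (not the paper's algorithm): one Poisson count y with a log link, a Gaussian prior N(m0, v0), and a Gaussian approximation q = N(m, v) fitted by plain gradient ascent on the evidence lower bound. All numerical values are illustrative.

```python
import math

# 1-D model: y ~ Poisson(exp(x)), prior x ~ N(m0, v0).
# With q = N(m, v), E_q[exp(x)] = exp(m + v/2), so up to constants
#   ELBO(m, v) = y*m - exp(m + v/2) - ((m - m0)^2 + v)/(2*v0) + 0.5*log(v)
y, m0, v0 = 7, 0.0, 4.0

def elbo(m, v):
    return (y * m - math.exp(m + v / 2)
            - ((m - m0) ** 2 + v) / (2 * v0) + 0.5 * math.log(v))

# crude gradient ascent on (m, v); the paper uses an alternating
# direction maximization instead, this is only an illustration
m, v, lr = 0.0, 1.0, 0.05
for _ in range(2000):
    rate = math.exp(m + v / 2)
    m += lr * (y - rate - (m - m0) / v0)              # d ELBO / dm
    v = max(1e-6, v + lr * (-0.5 * rate - 0.5 / v0 + 0.5 / v))  # d ELBO / dv
```

    At the maximizer the stationarity conditions read y = exp(m + v/2) + (m - m0)/v0 and 1/v = exp(m + v/2) + 1/v0; the second shows how the optimal covariance is penalized, the Tikhonov-like feature the abstract points out.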

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au; Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au

    The paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possibility of the extension of results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  14. Multidisciplinary design optimization of aircraft wing structures with aeroelastic and aeroservoelastic constraints

    NASA Astrophysics Data System (ADS)

    Jung, Sang-Young

    Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.

  15. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
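In the simplest scalar case, the finite-dimensional approximations above reduce to iterating the discrete-time Riccati equation to a fixed point and reading off the feedback gain. A minimal sketch, with illustrative system values not taken from the paper:

```python
# Scalar discrete-time LQR: iterate the Riccati recursion to a fixed point,
# then compute the optimal state-feedback gain. Toy system x_{k+1} = a x_k + b u_k
# with stage cost q x^2 + r u^2; all numbers are illustrative.
def dlqr_scalar(a, b, q, r, iters=1000, tol=1e-12):
    p = q
    for _ in range(iters):
        p_next = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    k = (a * p * b) / (r + b * p * b)  # optimal control law: u_k = -k x_k
    return k, p

k, p = dlqr_scalar(a=1.2, b=1.0, q=1.0, r=1.0)
print(abs(1.2 - k) < 1.0)  # closed loop a - b*k is stable: prints True
```

In the infinite-dimensional setting the analogous recursion runs on finite-dimensional (e.g., spline-based Galerkin) projections of the semigroup, and convergence of the resulting gains is what the framework establishes.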

  16. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  17. Approximation of Optimal Infinite Dimensional Compensators for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Mingori, D. L.; Adamian, A.; Jabbari, F.

    1985-01-01

The infinite dimensional compensator for a large class of flexible structures, modeled as distributed systems, is discussed, as well as an approximation scheme for designing finite dimensional compensators to approximate the infinite dimensional compensator. The approximation scheme is applied to develop a compensator for a space antenna model based on wrap-rib antennas currently being built. While the present model has been simplified, it retains the salient features of rigid body modes and several distributed components of different characteristics. The control and estimator gains are represented by functional gains, which provide graphical representations of the control and estimator laws. These functional gains also indicate the convergence of the finite dimensional compensators and show which modes the optimal compensator ignores.

  18. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  19. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1989-01-01

Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least-squares norm are reported.

  20. Optimal moving grids for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Wathen, A. J.

    1992-01-01

    Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.

  1. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Design optimization of the sensor spatial arrangement in a direct magnetic field-based localization system for medical applications.

    PubMed

    Marechal, Luc; Shaohui Foong; Zhenglong Sun; Wood, Kristin L

    2015-08-01

Motivated by the need for developing a neuronavigation system to improve efficacy of intracranial surgical procedures, a localization system using passive magnetic fields for real-time monitoring of the insertion process of an external ventricular drain (EVD) catheter is conceived and developed. This system operates on the principle of measuring the static magnetic field of a magnetic marker using an array of magnetic sensors. An artificial neural network (ANN) is directly used for solving the inverse problem of magnetic dipole localization for improved efficiency and precision. As the accuracy of the localization system is highly dependent on the sensors' spatial locations, an optimization framework for the design of such sensing assemblies, based on understanding and classification of experimental sensor characteristics as well as prior knowledge of the general trajectory of the localization pathway, is described and investigated in this paper. Both optimized and non-optimized sensor configurations were experimentally evaluated, and results show superior performance from the optimized configuration. While the approach presented here utilizes ventriculostomy as an illustrative platform, it can be extended to other medical applications that require localization inside the body.

  3. A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
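For a scalar AR(1) surrogate of such a Gaussian process, the one-step predictive crossing probability, and a threshold alarm built on it, can be sketched as follows. The parameters and the alarm threshold are illustrative, and the Kalman filtering layer of the article is omitted (the current state is assumed known):

```python
import math

# One-step level-crossing probability for x_{k+1} = a x_k + w_k, w_k ~ N(0, s2),
# assuming the current state x_k is observed exactly (no filtering step here).
def crossing_prob(x, a, s2, level):
    z = (level - a * x) / math.sqrt(s2)         # standardized distance to the level
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # P(x_{k+1} > level)

def alarm(x, a, s2, level, p_alarm=0.5):
    # raise an alarm when the predicted crossing probability is high enough
    return crossing_prob(x, a, s2, level) >= p_alarm
```

Tuning `p_alarm` trades detection probability against false alarms, which is precisely the design criterion of the optimal alarm system.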

  4. Optimal percolation on multiplex networks.

    PubMed

    Osat, Saeed; Faqeeh, Ali; Radicchi, Filippo

    2017-11-16

    Optimal percolation is the problem of finding the minimal set of nodes whose removal from a network fragments the system into non-extensive disconnected clusters. The solution to this problem is important for strategies of immunization in disease spreading, and influence maximization in opinion dynamics. Optimal percolation has received considerable attention in the context of isolated networks. However, its generalization to multiplex networks has not yet been considered. Here we show that approximating the solution of the optimal percolation problem on a multiplex network with solutions valid for single-layer networks extracted from the multiplex may have serious consequences in the characterization of the true robustness of the system. We reach this conclusion by extending many of the methods for finding approximate solutions of the optimal percolation problem from single-layer to multiplex networks, and performing a systematic analysis on synthetic and real-world multiplex networks.
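As a baseline for what "finding the minimal set of nodes" looks like in practice, here is the greedy adaptive high-degree heuristic on a single-layer adjacency dictionary; it is one of the simple single-layer strategies such studies compare against, not the paper's multiplex method:

```python
# Greedy adaptive dismantling: repeatedly remove the current highest-degree node.
def greedy_dismantle(adj, k):
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    removed = []
    for _ in range(k):
        u = max(adj, key=lambda v: len(adj[v]))  # current highest-degree node
        removed.append(u)
        for v in adj.pop(u):
            adj[v].discard(u)
    return removed

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dismantle(star, 1))  # removing the hub fragments the star: [0]
```

On a multiplex network the degree would have to aggregate information across layers, which is exactly where single-layer heuristics can misjudge the true robustness of the system.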

  5. Approximate optimal guidance for the advanced launch system

    NASA Technical Reports Server (NTRS)

    Feeley, T. S.; Speyer, J. L.

    1993-01-01

A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme over alternative numerical iterative optimization schemes because of the unreliable convergence properties of those iterative schemes, and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.

  6. Electronic-structure calculations of praseodymium metal by means of modified density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svane, A.; Trygg, J.; Johansson, B.

    1997-09-01

Electronic-structure calculations of elemental praseodymium are presented. Several approximations are used to describe the Pr f electrons. It is found that the low-pressure, trivalent phase is well described using either the self-interaction corrected (SIC) local-spin-density (LSD) approximation or the generalized-gradient approximation (GGA) with spin and orbital polarization (OP). In the SIC-LSD approach the Pr f electrons are treated explicitly as localized, with a localization energy given by the self-interaction of the f orbital. In the GGA+OP scheme the f-electron localization is described by the onset of spin and orbital polarization, the energetics of which is described by the spin-moment formation energy and a term proportional to the square of the total orbital moment, L_z^2. The high-pressure phase is well described with the f electrons treated as band electrons, in either the LSD or the GGA approximations, of which the latter describes the experimental equation of state more accurately. The calculated pressure of the transition from localized to delocalized behavior is 280 kbar in the SIC-LSD approximation and 156 kbar in the GGA+OP approach, both comparing favorably with the experimentally observed transition pressure of 210 kbar. © 1997 The American Physical Society.

  7. Zeolite formation from coal fly ash and its adsorption potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duangkamol Ruen-ngam; Doungmanee Rungsuk; Ronbanchob Apiratikul

The possibility of converting coal fly ash (CFA) to zeolite was evaluated. CFA samples from the local power plant in Prachinburi province, Thailand, were collected during a 3-month time span to account for the inconsistency of the CFA quality, and it was evident that the deviation of the quality of the raw material did not have significant effects on the synthesis. The zeolite product was found to be type X. The most suitable weight ratio of sodium hydroxide (NaOH) to CFA was approximately 2.25, because this gave a reasonably high zeolite yield with good cation exchange capacity (CEC). The silica (Si)-to-aluminum (Al) molar ratio of 4.06 yielded the highest crystallinity level for zeolite X at 79%, with a CEC of 240 meq/100 g and a surface area of 325 m^2/g. Optimal crystallization temperature and time were 90 °C and 4 hr, respectively, which gave the highest CEC of approximately 305 meq/100 g. Yields obtained from all experiments were in the range of 50-72%. 29 refs., 5 tabs., 7 figs.

  8. Solution NMR investigation of the response of the lactose repressor core domain dimer to hydrostatic pressure.

    PubMed

    Fuglestad, Brian; Stetz, Matthew A; Belnavis, Zachary; Wand, A Joshua

    2017-12-01

Previous investigations of the sensitivity of the lac repressor to high hydrostatic pressure have led to varying conclusions. Here high-pressure solution NMR spectroscopy is used to provide an atomic level view of the pressure-induced structural transition of the lactose repressor regulatory domain (LacI* RD) bound to the ligand IPTG. As the pressure is raised from ambient to 3 kbar, the native state of the protein is converted to a partially unfolded form. Estimates of rotational correlation times using transverse optimized relaxation indicate that a monomeric state is never reached and that the predominant form of the LacI* RD is dimeric throughout this pressure change. Spectral analysis suggests that the pressure-induced transition is localized and is associated with a volume change of approximately -115 ml mol^-1 and an average pressure-dependent change in compressibility of approximately 30 ml mol^-1 kbar^-1. In addition, a subset of resonances emerges at high pressures, indicating the presence of a non-native but folded alternate state. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. On the Angular Dependence of the Vicinal Fluorine-Fluorine Coupling Constant in 1,2-Difluoroethane:  Deviation from a Karplus-like Shape.

    PubMed

    Provasi, Patricio F; Sauer, Stephan P A

    2006-07-01

    The angular dependence of the vicinal fluorine-fluorine coupling constant, (3)JFF, for 1,2-difluoroethane has been investigated with several polarization propagator methods. (3)JFF and its four Ramsey contributions were calculated using the random phase approximation (RPA), its multiconfigurational generalization, and both second-order polarization propagator approximations (SOPPA and SOPPA(CCSD)), using locally dense basis sets. The geometries were optimized for each dihedral angle at the level of density functional theory using the B3LYP functional and fourth-order Møller-Plesset perturbation theory. The resulting coupling constant curves were fitted to a cosine series with 8 coefficients. Our results are compared with those obtained previously and values estimated from experiment. It is found that the inclusion of electron correlation in the calculation of (3)JFF reduces the absolute values. This is mainly due to changes in the FC contribution, which for dihedral angles around the trans conformation even changes its sign. This sign change is responsible for the breakdown of the Karplus-like curve.

  10. Discontinuous finite volume element discretization for coupled flow-transport problems arising in models of sedimentation

    NASA Astrophysics Data System (ADS)

    Bürger, Raimund; Kumar, Sarvesh; Ruiz-Baier, Ricardo

    2015-10-01

    The sedimentation-consolidation and flow processes of a mixture of small particles dispersed in a viscous fluid at low Reynolds numbers can be described by a nonlinear transport equation for the solids concentration coupled with the Stokes problem written in terms of the mixture flow velocity and the pressure field. Here both the viscosity and the forcing term depend on the local solids concentration. A semi-discrete discontinuous finite volume element (DFVE) scheme is proposed for this model. The numerical method is constructed on a baseline finite element family of linear discontinuous elements for the approximation of velocity components and concentration field, whereas the pressure is approximated by piecewise constant elements. The unique solvability of both the nonlinear continuous problem and the semi-discrete DFVE scheme is discussed, and optimal convergence estimates in several spatial norms are derived. Properties of the model and the predicted space accuracy of the proposed formulation are illustrated by detailed numerical examples, including flows under gravity with changing direction, a secondary settling tank in an axisymmetric setting, and batch sedimentation in a tilted cylindrical vessel.

  11. Design Sensitivity for a Subsonic Aircraft Predicted by Neural Network and Regression Models

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Patnaik, Surya N.

    2005-01-01

A preliminary methodology was obtained for the design optimization of a subsonic aircraft by coupling NASA Langley Research Center's Flight Optimization System (FLOPS) with NASA Glenn Research Center's design optimization testbed (COMETBOARDS with regression and neural network analysis approximators). The aircraft modeled can carry 200 passengers at a cruise speed of Mach 0.85 over a range of 2500 n mi and can operate on standard 6000-ft takeoff and landing runways. The design simulation was extended to evaluate the optimal airframe and engine parameters for the subsonic aircraft to operate on nonstandard runways. Regression and neural network approximators were used to examine aircraft operation on runways ranging in length from 4500 to 7500 ft.

  12. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) the locality, making faraway anchors have smaller influences on current data, and 2) the flexibility, balancing well between the reconstruction of current data and the locality. In this paper, we address the problem from the theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both the locality and the flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
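To make the locality notion concrete, here is a minimal one-dimensional sketch of the simplest scheme discussed above, local Gaussian coding; the anchors and bandwidth are illustrative choices, not values from the paper:

```python
import math

# Local Gaussian coding: each anchor gets a weight that decays with its
# distance to the data point; normalizing the weights gives the code.
def local_gaussian_code(x, anchors, sigma=1.0):
    w = [math.exp(-((x - a) ** 2) / (2.0 * sigma ** 2)) for a in anchors]
    s = sum(w)
    return [wi / s for wi in w]

code = local_gaussian_code(0.1, anchors=[0.0, 1.0, 5.0])
# the nearby anchor at 0.0 dominates; the faraway anchor at 5.0 gets ~0 weight
```

The locality challenge named in the abstract is visible here: the anchor at 5.0 contributes almost nothing, so the local linear approximation near x = 0.1 rests entirely on the nearby anchors.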

  13. Volume of interest CBCT and tube current modulation for image guidance using dynamic kV collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Parsons, David; Robar, James L.

    2016-04-15

Purpose: The focus of this work is the development of a novel blade collimation system enabling volume of interest (VOI) CBCT with tube current modulation using the kV image guidance source on a linear accelerator. Advantages of the system are assessed, particularly with regard to reduction and localization of dose and improvement of image quality. Methods: A four blade dynamic kV collimator was developed to track a VOI during a CBCT acquisition. The current prototype is capable of tracking an arbitrary volume defined by the treatment planner for subsequent CBCT guidance. During gantry rotation, the collimator tracks the VOI with adjustment of position and dimension. CBCT image quality was investigated as a function of collimator dimension, while maintaining the same dose to the VOI, for a 22.2 cm diameter cylindrical water phantom with a 9 mm diameter bone insert centered on isocenter. Dose distributions were modeled using a dynamic BEAMnrc library and DOSXYZnrc. The resulting VOI dose distributions were compared to full-field CBCT distributions to quantify dose reduction and localization to the target volume. A novel method of optimizing x-ray tube current during CBCT acquisition was developed and assessed with regard to contrast-to-noise ratio (CNR) and imaging dose. Results: Measurements show that the VOI CBCT method using the dynamic blade system yields an increase in contrast-to-noise ratio by a factor of approximately 2.2. Depending upon the anatomical site, dose was reduced to 15%–80% of the full-field CBCT value along the central axis plane and down to less than 1% out of plane. The use of tube current modulation allowed for specification of a desired SNR within projection data. For approximately the same dose to the VOI, CNR was further increased by a factor of 1.2 for modulated VOI CBCT, giving a combined improvement of 2.6 compared to full-field CBCT.
Conclusions: The present dynamic blade system provides significant improvements in CNR for the same imaging dose and localization of imaging dose to a predefined volume of interest. The approach is compatible with tube current modulation, allowing optimization of the imaging protocol.

  14. Robust Approximations to the Non-Null Distribution of the Product Moment Correlation Coefficient I: The Phi Coefficient.

    ERIC Educational Resources Information Center

    Edwards, Lynne K.; Meyers, Sarah A.

    Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…

  15. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to a heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approach easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of binary values, ensuring robust performance.
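For contrast with the binary approach, the Tikhonov baseline mentioned above amounts to solving the penalized normal equations. A toy 2x2 sketch, where the forward matrix A, data b, and regularization weight lam are illustrative values rather than an ECGI forward model:

```python
# Tikhonov-regularized least squares on a toy 2x2 forward model.
def solve2(m, v):
    # direct 2x2 linear solve via Cramer's rule
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(m[1][1] * v[0] - m[0][1] * v[1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def tikhonov2(A, b, lam):
    # x = (A^T A + lam*I)^{-1} A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    AtA[0][0] += lam
    AtA[1][1] += lam
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

print(tikhonov2([[1.0, 0.0], [0.0, 1.0]], [2.0, 2.0], 1.0))  # shrinks toward [1.0, 1.0]
```

Increasing lam stabilizes the inversion but over-smooths the solution, which is exactly the drawback that motivates the binary formulation in this record.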

  16. MN15-L: A New Local Exchange-Correlation Functional for Kohn-Sham Density Functional Theory with Broad Accuracy for Atoms, Molecules, and Solids.

    PubMed

    Yu, Haoyu S; He, Xiao; Truhlar, Donald G

    2016-03-08

    Kohn-Sham density functional theory is widely used for applications of electronic structure theory in chemistry, materials science, and condensed-matter physics, but the accuracy depends on the quality of the exchange-correlation functional. Here, we present a new local exchange-correlation functional called MN15-L that predicts accurate results for a broad range of molecular and solid-state properties including main-group bond energies, transition metal bond energies, reaction barrier heights, noncovalent interactions, atomic excitation energies, ionization potentials, electron affinities, total atomic energies, hydrocarbon thermochemistry, and lattice constants of solids. The MN15-L functional has the same mathematical form as a previous meta-nonseparable gradient approximation exchange-correlation functional, MN12-L, but it is improved because we optimized it against a larger database, designated 2015A, and included smoothness restraints; the optimization has a much better representation of transition metals. The mean unsigned error on 422 chemical energies is 2.32 kcal/mol, which is the best among all tested functionals, with or without nonlocal exchange. The MN15-L functional also provides good results for test sets that are outside the training set. A key issue is that the functional is local (no nonlocal exchange or nonlocal correlation), which makes it relatively economical for treating large and complex systems and solids. Another key advantage is that medium-range correlation energy is built in so that one does not need to add damped dispersion by molecular mechanics in order to predict accurate noncovalent binding energies. We believe that the MN15-L functional should be useful for a wide variety of applications in chemistry, physics, materials science, and molecular biology.

  17. Introducing a new methodology for the calculation of local philicity and multiphilic descriptor: an alternative to the finite difference approximation

    NASA Astrophysics Data System (ADS)

    Sánchez-Márquez, Jesús; Zorrilla, David; García, Víctor; Fernández, Manuel

    2018-07-01

This work presents a new development based on the condensation scheme proposed by Chamorro and Pérez, in which new terms to correct the frozen molecular orbital approximation have been introduced (improved frontier molecular orbital approximation). The changes performed on the original development allow taking into account the orbital relaxation effects, providing equivalent results to those achieved by the finite difference approximation and leading also to a methodology with great advantages. Local reactivity indices based on this new development have been obtained for a sample set of molecules and they have been compared with those indices based on the frontier molecular orbital and finite difference approximations. A new definition based on the improved frontier molecular orbital methodology for the dual descriptor index is also shown. In addition, taking advantage of the characteristics of the definitions obtained with the new condensation scheme, the descriptor local philicity is analysed by separating the components corresponding to the frontier molecular orbital approximation and orbital relaxation effects, analysing also the local parameter multiphilic descriptor in the same way. Finally, the effect of using the basis set is studied and calculations using DFT, CI and Møller-Plesset methodologies are performed to analyse the consequence of different electronic-correlation levels.

  18. Profilometric characterization of DOEs with continuous microrelief

    NASA Astrophysics Data System (ADS)

    Korolkov, V. P.; Ostapenko, S. V.; Shimansky, R. V.

    2008-09-01

    Methodology of local characterization of continuous-relief diffractive optical elements has been discussed. The local profile depth can be evaluated using "approximated depth" defined without taking a profile near diffractive zone boundaries into account. Several methods to estimate the approximated depth have been offered.

  19. Task-based optimization of flip angle for fibrosis detection in T1-weighted MRI of liver

    PubMed Central

    Brand, Jonathan F.; Furenlid, Lars R.; Altbach, Maria I.; Galons, Jean-Philippe; Bhattacharyya, Achyut; Sharma, Puneet; Bhattacharyya, Tulshi; Bilgin, Ali; Martin, Diego R.

    2016-01-01

Abstract. Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. The current reference standard for diagnosing HF is biopsy followed by pathologist examination; however, this is limited by sampling error and carries a risk of complications. Pathology diagnosis of HF is based on textural change in the liver as a lobular collagen network that develops within portal triads. The scale of collagen lobules is characteristically in the order of 1 to 5 mm, which approximates the resolution limit of in vivo gadolinium-enhanced magnetic resonance imaging in the delayed phase. We use MRI of formalin-fixed human ex vivo liver samples as phantoms that mimic the textural contrast of in vivo Gd-MRI. We have developed a local texture analysis that is applied to phantom images, and the results are used to train model observers to detect HF. The performance of the observer is assessed with the area under the receiver-operating-characteristic curve (AUROC) as the figure of merit. To optimize the MRI pulse sequence, phantoms were scanned multiple times at a range of flip angles. The flip angle that was associated with the highest AUROC was chosen as optimal for the task of detecting HF. PMID:27446971
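The figure of merit used here, the AUROC, has a simple rank interpretation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A sketch with made-up observer scores (not study data):

```python
# Rank-based AUROC: fraction of (positive, negative) score pairs ordered
# correctly, counting ties as half a win.
def auroc(pos, neg):
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.7], [0.4, 0.3]))  # perfectly separated scores: 1.0
```

Computing this per flip angle and picking the maximizer is the task-based optimization the record describes.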

  20. Modeling and Control of a Delayed Hepatitis B Virus Model with Incubation Period and Combination Treatment.

    PubMed

    Sun, Deshun; Liu, Fei

    2018-06-01

    In this paper, a hepatitis B virus (HBV) model with an incubation period and delayed state and control variables is first proposed. Furthermore, combination treatment is adopted because it has a longer-lasting effect than mono-therapy. The equilibrium points and basic reproduction number are calculated, and the local stability of the model is analyzed. We then present optimal control strategies based on Pontryagin's minimum principle, with an objective function that not only reduces the levels of exposed cells, infected cells and free viruses nearly to zero at the end of therapy, but also minimizes the drug side-effects and the cost of treatment. Moreover, we develop a numerical simulation algorithm for solving our HBV model based on a combination of forward and backward difference approximations. The state dynamics of uninfected cells, exposed cells, infected cells, free viruses, CTL and ALT are simulated with and without optimal control, showing that HBV is reduced nearly to zero under the time-varying optimal control strategies, whereas the disease breaks out without control. Finally, the simulations show that strategy A is the best of the three strategies we adopt, and further comparisons are made between model (1) and model (2).

  1. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to iteratively solve d three-point Laplacian matrices. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. It also achieves results of quality comparable to state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Moreover, given the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
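    The separable 1D solve that the abstract describes can be illustrated with a minimal weighted-least-squares smoother for a single 1D subsystem. This is an illustrative sketch (the function name and parameters are ours, not the paper's): the system (I + λL) u = f, with L a weighted three-point Laplacian, is tridiagonal and is solved in linear time with the Thomas algorithm.

    ```python
    import numpy as np

    def wls_smooth_1d(f, w, lam):
        """Solve (I + lam*L) u = f for a 1D weighted Laplacian L using the
        linear-time Thomas algorithm. w[i] is the smoothness weight between
        samples i and i+1 (made small near edges to preserve them)."""
        n = len(f)
        # Tridiagonal coefficients of I + lam*L (rows sum to 1).
        lower = np.concatenate([[0.0], -lam * w])             # sub-diagonal
        upper = np.concatenate([-lam * w, [0.0]])             # super-diagonal
        diag = 1.0 + lam * (np.concatenate([[0.0], w]) + np.concatenate([w, [0.0]]))
        # Thomas algorithm: forward elimination...
        cp, dp = np.zeros(n), np.zeros(n)
        cp[0], dp[0] = upper[0] / diag[0], f[0] / diag[0]
        for i in range(1, n):
            m = diag[i] - lower[i] * cp[i - 1]
            cp[i] = upper[i] / m
            dp[i] = (f[i] - lower[i] * dp[i - 1]) / m
        # ...then back substitution.
        u = np.zeros(n)
        u[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            u[i] = dp[i] - cp[i] * u[i + 1]
        return u
    ```

    Because each row of I + λL sums to one, a constant signal passes through unchanged, while noise around a smooth signal is attenuated in proportion to λ.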

  2. Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis

    NASA Technical Reports Server (NTRS)

    Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.

    1996-01-01

    A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.

  3. Three-dimensional unstructured grid generation via incremental insertion and local optimization

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Wiltberger, N. Lyn; Gandhi, Amar S.

    1992-01-01

    Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this does not hold lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementation details.

  4. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system has been proposed in this paper. A new structure for a multi-objective transfer trajectory optimization model that divides the transfer trajectory into several segments and gives the dominations for invariant manifolds and low-thrust control in different segments has been established. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization are in agreement with the results obtained using direct multi-objective optimization methods, and the computational workload of the adaptive surrogate-based multi-objective optimization is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of the Pareto points of the adaptive surrogate-based multi-objective optimization is approximately 8 times that of the direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
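    A surrogate of the kind used to cut the cost of such trajectory optimizations can be sketched with a generic Gaussian radial-basis-function interpolant. This is a hypothetical stand-in, not the paper's adaptive mixed-sampling model: it simply fits expensive objective evaluations at sample points and returns a cheap predictor for the optimizer to query.

    ```python
    import numpy as np

    def fit_rbf(X, y, eps=1.0):
        """Fit a Gaussian RBF surrogate to samples (X, y), X of shape
        (n, dim). Returns a callable predictor. Illustrative only; the
        paper's surrogate is adaptive and uses a mixed sampling strategy."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Phi = np.exp(-eps * d2)
        # Tiny ridge term for numerical stability of the kernel solve.
        coef = np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), y)

        def predict(Xq):
            dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-eps * dq) @ coef

        return predict
    ```

    The surrogate interpolates the training samples (up to the tiny regularization), so the optimizer can evaluate thousands of candidate designs at negligible cost between expensive re-samplings.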

  5. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  6. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    PubMed Central

    2017-01-01

    Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of “premature convergence,” that is, the possibility of trapping in local optimum when dealing with nonlinear optimization problem with a large number of local extreme values. In order to surmount the shortcomings of CSO, Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm is introduced to improve the accuracy of CSO, which easily falls into local optima in the later stage; the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is then proposed by introducing a tent map for jumping out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, photovoltaic MPPT model and experimental platform are established and global maximum power point tracking control strategy is achieved by CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment. PMID:29181020

  7. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.

    PubMed

    Nie, Xiaohua; Wang, Wei; Nie, Haoyao

    2017-01-01

    Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence," that is, the possibility of trapping in local optimum when dealing with nonlinear optimization problem with a large number of local extreme values. In order to surmount the shortcomings of CSO, Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, the Quantum-behaved Cat Swarm Optimization (QCSO) algorithm is introduced to improve the accuracy of CSO, which easily falls into local optima in the later stage; the Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is then proposed by introducing a tent map for jumping out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, photovoltaic MPPT model and experimental platform are established and global maximum power point tracking control strategy is achieved by CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment.
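    The tent map named in the abstract is a simple chaotic iteration on [0, 1]; chaos-based optimizers use such sequences to perturb stagnated solutions so the search can escape local optima. A minimal sketch (helper names and constants are ours, not the paper's):

    ```python
    def tent_map(x, mu=2.0):
        """One iteration of the tent map on [0, 1]: the classic
        piecewise-linear chaotic map used in chaos-enhanced optimizers."""
        return mu * x if x < 0.5 else mu * (1.0 - x)

    def chaotic_sequence(n, seed=0.37):
        """Generate n chaotic values in [0, 1]; a candidate solution can be
        re-seeded by mapping such values into the search bounds, e.g.
        lb + x * (ub - lb)."""
        xs, x = [], seed
        for _ in range(n):
            x = tent_map(x)
            xs.append(x)
        return xs
    ```

    Unlike uniform random restarts, the tent-map sequence is deterministic yet non-repeating over short horizons, which is the property these hybrid algorithms exploit.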

  8. Multidisciplinary design optimization - An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    A definition of the multidisciplinary design optimization (MDO) is introduced, and functionality and relationship of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a humane interface.

  9. Capturing planar shapes by approximating their outlines

    NASA Astrophysics Data System (ADS)

    Sarfraz, M.; Riyazuddin, M.; Baig, M. H.

    2006-05-01

    A non-deterministic evolutionary approach for approximating the outlines of planar shapes has been developed. Non-uniform Rational B-splines (NURBS) have been utilized as an underlying approximation curve scheme. Simulated Annealing heuristic is used as an evolutionary methodology. In addition to independent studies of the optimization of weight and knot parameters of the NURBS, a separate scheme has also been developed for the optimization of weights and knots simultaneously. The optimized NURBS models have been fitted over the contour data of the planar shapes for the ultimate and automatic output. The output results are visually pleasing with respect to the threshold provided by the user. A web-based system has also been developed for the effective and worldwide utilization. The objective of this system is to provide the facility to visualize the output to the whole world through internet by providing the freedom to the user for various desired input parameters setting in the algorithm designed.

  10. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
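    The successive-approximation idea with linearly independent basis functions can be sketched on a tabular toy problem: repeatedly regress the one-step Bellman target onto the feature matrix until the value-function weights converge. This is an illustrative sketch, not the paper's actor-critic implementation (all names are ours).

    ```python
    import numpy as np

    def successive_approx_value(phi, phi_next, r, gamma=0.9, iters=200):
        """Value estimation with linear function approximation:
        iterate w <- argmin_w ||phi @ w - (r + gamma * phi_next @ w_old)||^2,
        i.e., regress the Bellman target onto the basis functions.
        phi, phi_next: basis features at sampled states and their successors."""
        A = np.linalg.pinv(phi)          # least-squares projection operator
        w = np.zeros(phi.shape[1])
        for _ in range(iters):
            target = r + gamma * phi_next @ w
            w = A @ target
        return w
    ```

    With one indicator feature per state the iteration reduces to exact value iteration, so its fixed point can be checked against the known solution of a two-state chain.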

  11. Exploiting the locality of periodic subsystem density-functional theory: efficient sampling of the Brillouin zone.

    PubMed

    Genova, Alessandro; Pavanello, Michele

    2015-12-16

    In order to approximately satisfy the Bloch theorem, simulations of complex materials involving periodic systems are made n(k) times more complex by the need to sample the first Brillouin zone at n(k) points. By combining ideas from Kohn-Sham density-functional theory (DFT) and orbital-free DFT, for which no sampling is needed due to the absence of waves, subsystem DFT offers an interesting middle ground capable of sizable theoretical speedups against Kohn-Sham DFT. By splitting the supersystem into interacting subsystems, and mapping their quantum problem onto separate auxiliary Kohn-Sham systems, subsystem DFT allows an optimal topical sampling of the Brillouin zone. We elucidate this concept with two proof-of-principle simulations: a water bilayer on Pt[1 1 1]; and a complex system relevant to catalysis-a thiophene molecule physisorbed on a molybdenum sulfide monolayer deposited on top of an α-alumina support. For the latter system, a speedup of 300% is achieved against the subsystem DFT reference by using an optimized Brillouin zone sampling (600% against KS-DFT).

  12. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.

    PubMed

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-06-06

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee the full-view coverage for the given region of interest (ROI). To tackle this issue, we derive the constraint condition of the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that the full-view area coverage can be approximately guaranteed, as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  13. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    PubMed Central

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee the full-view coverage for the given region of interest (ROI). To tackle this issue, we derive the constraint condition of the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that the full-view area coverage can be approximately guaranteed, as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304

  14. Computational Analysis of the Effect of Porosity on Shock Cell Strength at Cruise

    NASA Technical Reports Server (NTRS)

    Massey, Steven J.; Elmiligui, Alaa A.; Pao, S. Paul; Abdol-Hamid, Khaled S.; Hunter, Craig A.

    2006-01-01

    A computational flow field analysis is presented of the effect of core cowl porosity on shock cell strength for a modern separate flow nozzle at cruise conditions. The goal of this study was to identify the primary physical mechanisms by which the application of porosity can reduce shock cell strength and hence the broadband shock associated noise. The flow is simulated by solving the asymptotically steady, compressible, Reynolds-averaged Navier-Stokes equations on a structured grid using an implicit, upwind, flux-difference splitting finite volume scheme. The standard two-equation k - epsilon turbulence model with a linear stress representation is used, with the addition of an eddy-viscosity dependence on the total temperature gradient normalized by the local turbulence length scale. Specific issues addressed in this study were the optimal area required to weaken a shock impinging on the core cowl surface and the optimal level of porosity and placement of porous areas for reduction of the overall shock cell strength downstream. Two configurations of porosity were found to reduce downstream shock strength by approximately 50%.

  15. A novel partitioning method for block-structured adaptive meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  16. Estimate the effective connectivity in multi-coupled neural mass model using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile

    2017-03-01

    Assessment of the effective connectivity among different brain regions during seizure is a crucial problem in neuroscience today. As a consequence, a new model inversion framework of brain function imaging is introduced in this manuscript. This framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. The particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of NMM) and the epileptiform dynamics (the states of NMM) that cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. When the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
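    A minimal particle swarm optimization loop of the kind used for such parameter estimation might look like the following. This is a generic textbook PSO sketch (all constants, names, and the inertia/acceleration choices are ours, not the paper's NMM-specific setup):

    ```python
    import numpy as np

    def pso_minimize(f, lb, ub, n_particles=30, iters=200, seed=0):
        """Minimal particle swarm optimizer: each particle remembers its
        personal best; the swarm shares a global best that attracts all
        particles. f maps a parameter vector to a scalar cost."""
        rng = np.random.default_rng(seed)
        dim = len(lb)
        x = rng.uniform(lb, ub, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # inertia + cognitive (personal best) + social (global best) pulls
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lb, ub)
            fx = np.array([f(p) for p in x])
            better = fx < pval
            pbest[better], pval[better] = x[better], fx[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()
    ```

    In the connectivity-estimation setting, f would measure the mismatch between model-simulated and recorded signals over the NMM parameters.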

  17. A novel partitioning method for block-structured adaptive meshes

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-07-01

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  18. A nonlinear H-infinity approach to optimal control of the depth of anaesthesia

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Rigatou, Efthymia; Zervos, Nikolaos

    2016-12-01

    Controlling the level of anaesthesia is important for improving the success rate of surgeries and for reducing the risks to which operated patients are exposed. This paper proposes a nonlinear H-infinity approach to optimal control of the level of anaesthesia. The dynamic model of anaesthesia, which describes the concentration of the anaesthetic drug in different parts of the body, is subjected to linearization at local operating points. These are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last control input that was exerted on it. For this linearization, a Taylor series expansion is performed and the system's Jacobian matrices are computed. For the linearized model an H-infinity controller is designed. The feedback control gains are found by solving an algebraic Riccati equation at each iteration of the control algorithm. The modelling errors due to this approximate linearization are considered as disturbances which are compensated by the robustness of the control loop. The stability of the control loop is confirmed through Lyapunov analysis.

  19. Optimal discrete-time LQR problems for parabolic systems with unbounded input: Approximation and convergence

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1988-01-01

    An abstract approximation and convergence theory for the closed-loop solution of discrete-time linear-quadratic regulator problems for parabolic systems with unbounded input is developed. Under relatively mild stabilizability and detectability assumptions, functional analytic, operator techniques are used to demonstrate the norm convergence of Galerkin-based approximations to the optimal feedback control gains. The application of the general theory to a class of abstract boundary control systems is considered. Two examples, one involving the Neumann boundary control of a one-dimensional heat equation, and the other, the vibration control of a cantilevered viscoelastic beam via shear input at the free end, are discussed.
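    Once a Galerkin scheme reduces the parabolic system to finite dimensions, the feedback gains come from a standard discrete-time Riccati equation. The following is a finite-dimensional sketch of that step (the fixed-point iteration and names are ours; the paper's contribution is the convergence theory, not this computation):

    ```python
    import numpy as np

    def dlqr_gain(A, B, Q, R, iters=500):
        """Discrete-time LQR: iterate the Riccati difference equation
        P <- Q + A^T P (A - B K),  K = (R + B^T P B)^{-1} B^T P A,
        to a fixed point and return the feedback gain K (u = -K x)."""
        P = Q.copy()
        for _ in range(iters):
            BtP = B.T @ P
            K = np.linalg.solve(R + BtP @ B, BtP @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K, P
    ```

    For the scalar case A = B = Q = R = 1, the fixed point is the golden ratio, P = (1 + √5)/2, giving K = P/(1 + P) ≈ 0.618 and a stable closed loop |A - BK| ≈ 0.382 < 1.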

  20. Geometrical optimization of a local ballistic magnetic sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya

    2014-04-07

    We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.

  1. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2 pi noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2 pi phase obtained in the first step. The adaptive local modulo-2 pi phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process.16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
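    The zero-order local approximation step can be sketched as phasor averaging in a sliding window: averaging exp(jψ) rather than ψ itself respects the modulo-2π wrap. This sketch uses a fixed window (the cited algorithm additionally selects the window size pointwise and adds a first-order term):

    ```python
    import numpy as np

    def denoise_wrapped_phase(psi, half_window=2):
        """Zero-order local denoising of a noisy modulo-2pi phase signal:
        average the unit phasors exp(j*psi) in a sliding window and take
        the angle, which avoids artifacts at the 2pi wrap."""
        z = np.exp(1j * psi)
        n = len(z)
        out = np.empty(n)
        for i in range(n):
            lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
            out[i] = np.angle(z[lo:hi].mean())
        return out
    ```

    For small noise, averaging over a window of 2h+1 samples reduces the phase-noise standard deviation by roughly √(2h+1), at the cost of blurring genuine phase detail; hence the paper's pointwise adaptive window selection.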

  2. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index ( d' ). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.

  3. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to misspecification in the probability distribution of the responses. PMID:28163359

  4. Exploring the limit of accuracy for density functionals based on the generalized gradient approximation: Local, global hybrid, and range-separated hybrid functionals with and without dispersion corrections

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2014-03-25

    The limit of accuracy for semi-empirical generalized gradient approximation (GGA) density functionals is explored in this paper by parameterizing a variety of local, global hybrid, and range-separated hybrid functionals. The training methodology employed differs from conventional approaches in 2 main ways: (1) Instead of uniformly truncating the exchange, same-spin correlation, and opposite-spin correlation functional inhomogeneity correction factors, all possible fits up to fourth order are considered, and (2) Instead of selecting the optimal functionals based solely on their training set performance, the fits are validated on an independent test set and ranked based on their overall performance on the training and test sets. The 3 different methods of accounting for exchange are trained both with and without dispersion corrections (DFT-D2 and VV10), resulting in a total of 491,508 candidate functionals. For each of the 9 functional classes considered, the results illustrate the trade-off between improved training set performance and diminished transferability. Since all 491,508 functionals are uniformly trained and tested, this methodology allows the relative strengths of each type of functional to be consistently compared and contrasted. Finally, the range-separated hybrid GGA functional paired with the VV10 nonlocal correlation functional emerges as the most accurate form for the present training and test sets, which span thermochemical energy differences, reaction barriers, and intermolecular interactions involving lighter main group elements.

  5. Difficulty of distinguishing product states locally

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.

    2017-01-01

    Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.

  6. Dynamic behavior of acoustic metamaterials and metaconfigured structures with local oscillators

    NASA Astrophysics Data System (ADS)

    Manimala, James Mathew

    Dynamic behavior of acoustic metamaterials (AM) and metaconfigured structures (MCS) with various oscillator-type microstructures or local attachments was investigated. AM derive their unusual elastic wave manipulation capabilities not just from material constituents but more so from engineered microstructural configurations. Depending on the scale of implementation, these "microstructures" may be deployed as microscopic inclusions in metacomposites or even as complex endo-structures within load-bearing exo-structures in MCS. The frequency-dependent negative effective-mass exhibited by locally resonant microstructures when considered as a single degree of freedom system was experimentally verified using a structure with an internal mass-spring resonator. AM constructed by incorporating resonators in a host material display spatial attenuation of harmonic stress waves within a tunable bandgap frequency range. An apparent damping coefficient was derived to compare the degree of attenuation achieved in these wholly elastic AM to equivalent conventionally damped models illustrating their feasibility as stiff structures that simultaneously act as effective damping elements. Parametric studies were performed using simulations to design and construct MCS with attached resonators for dynamic load mitigation applications. 98% payload isolation at resonance (7 Hz) was experimentally attained using a low-frequency vibration isolator with tip-loaded cantilever beam resonators. Pendulum impact tests on a resonator stack substantiated a peak transmitted stress reduction of about 60% and filtering of the resonator frequencies in the transmitted spectrum. Drop-tower tests were done to gauge the shock mitigation performance of an AM-inspired infrastructural building-block with internal resonators. Proof-of-concept experiments using an array of multifunctional resonators demonstrate the possibility of integrating energy harvesting and transducer capabilities. 
Stress wave attenuation in locally dissipative AM with various damped oscillator microstructures was studied using mechanical lattice models. The presence of damping was represented by a complex effective-mass. Analytical transmissibilities and numerical verifications were obtained for Kelvin-Voigt-type, Maxwell-type and Zener-type oscillators. Although peak attenuation at resonance is diminished, broadband attenuation was found to be achievable without increasing mass ratio, obviating the bandgap width limitations of locally resonant AM. Static and frequency-dependent measures of optimal damping that maximize the attenuation characteristics were established. A transitional value for the excitation frequency was identified within the locally resonant bandgap, above which there always exists an optimal amount of damping that renders the attenuation for the dissipative AM greater than that for the locally resonant case. AM with nonlinear stiffnesses were also investigated. For a base-excited two degree of freedom system consisting of a master structure and a Duffing-type oscillator, approximate transmissibility was derived, verified using simulations and compared to its equivalent damped model. Analytical solutions for dispersion curve shifts in nonlinear chains with linear resonators and in linear chains with nonlinear oscillators were obtained using perturbation analysis and first order approximations for cubic hardening and softening cases. Amplitude-activated alterations in bandgap width and the possibility of phenomena such as branch curling and overtaking were observed. Device implications of nonlinear AM as amplitude-dependent filters and direction-biased waveguides were examined using simulations.
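    The frequency-dependent negative effective mass verified in this work has a standard closed form for a single host mass carrying an internal mass-spring resonator. The sketch below uses that textbook single-degree-of-freedom expression with illustrative parameter values, not the dissertation's specific configurations:

    ```python
    import numpy as np

    def effective_mass(omega, m1=1.0, m2=0.5, omega0=2 * np.pi * 100):
        """Frequency-dependent effective mass of a host mass m1 carrying an
        internal mass-spring resonator (mass m2, natural frequency omega0).
        Standard single-DOF result; the parameter values are illustrative."""
        return m1 + m2 * omega0**2 / (omega0**2 - omega**2)

    w0 = 2 * np.pi * 100
    # Static limit: the resonator just adds its mass.
    print(effective_mass(0.0))          # m1 + m2 = 1.5
    # Just above resonance the effective mass turns negative,
    # which is what produces the locally resonant attenuation band.
    print(effective_mass(1.05 * w0))
    ```

    Damped (Kelvin-Voigt, Maxwell, Zener) oscillators replace `omega0**2` terms with complex stiffness, giving the complex effective mass the abstract mentions.
    
    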

  7. Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers

    NASA Astrophysics Data System (ADS)

    Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu

    2017-10-01

    Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to obtain approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides a convenient way to optimally control the convergence of the approximate solutions. In practice, OHAM ensures very rapid convergence of the solution after only one iteration and with a small number of steps.

  8. Legendre spectral-collocation method for solving some types of fractional optimal control problems

    PubMed Central

    Sweilam, Nasser H.; Al-Ajami, Tamer M.

    2014-01-01

    In this paper, the Legendre spectral-collocation method was applied to obtain approximate solutions for some types of fractional optimal control problems (FOCPs). The fractional derivative was described in the Caputo sense. Two different approaches were presented, in the first approach, necessary optimality conditions in terms of the associated Hamiltonian were approximated. In the second approach, the state equation was discretized first using the trapezoidal rule for the numerical integration followed by the Rayleigh–Ritz method to evaluate both the state and control variables. Illustrative examples were included to demonstrate the validity and applicability of the proposed techniques. PMID:26257937

  9. Computational methods for optimal linear-quadratic compensators for infinite dimensional discrete-time systems

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation theory and computational methods are developed for the determination of optimal linear-quadratic feedback control, observers and compensators for infinite dimensional discrete-time systems. Particular attention is paid to systems whose open-loop dynamics are described by semigroups of operators on Hilbert spaces. The approach taken is based on the finite dimensional approximation of the infinite dimensional operator Riccati equations which characterize the optimal feedback control and observer gains. Theoretical convergence results are presented and discussed. Numerical results for an example involving a heat equation with boundary control are presented and used to demonstrate the feasibility of the method.

  10. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). 
Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
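    The kind of model described above, a two-state Markov chain whose transition probabilities are logistic functions with a linear dose effect on the logit scale, can be sketched as a simulator of dichotomous sleep data. All parameter values here are invented for illustration, not the paper's estimates:

    ```python
    import numpy as np

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    def simulate_sleep(n_steps, eta_aw, eta_sa, dose=0.0, slope=0.0, rng=None):
        """Simulate dichotomous sleep states (0 = awake, 1 = asleep) from a
        two-state Markov chain with logistic transition probabilities and a
        linear dose effect added on the logit scale. Illustrative values only."""
        rng = rng or np.random.default_rng(0)
        states = [0]                       # start awake
        for _ in range(n_steps - 1):
            if states[-1] == 0:            # P(awake -> asleep)
                p = logistic(eta_aw + slope * dose)
            else:                          # P(asleep -> awake), reduced by dose
                p = logistic(eta_sa - slope * dose)
            flip = rng.random() < p
            states.append(1 - states[-1] if flip else states[-1])
        return np.array(states)

    x = simulate_sleep(500, eta_aw=-1.0, eta_sa=-2.0, dose=6.0, slope=0.2)
    print(x.mean())   # fraction of the night spent asleep
    ```

    A design evaluation would repeat such simulations across the candidate dose arms and re-estimate the logit parameters, which is essentially what the SSE validation in the abstract does.
    
    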

  11. Rational approximations to rational models: alternative algorithms for category learning.

    PubMed

    Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J

    2010-10-01

    Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
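    A single-particle approximation of the sort described can be sketched as follows, using a Chinese restaurant process prior with a Beta-Bernoulli likelihood over one binary feature. The restriction to one feature and the hyperparameter values are simplifying assumptions for illustration, not the paper's exact model:

    ```python
    import numpy as np

    def particle_filter_rmc(stimuli, alpha=1.0, a=1.0, b=1.0, rng=None):
        """Single-particle sequential approximation to a CRP clustering model:
        each binary stimulus is assigned to an existing cluster or a new one,
        sampled from the local posterior (CRP prior x Beta-Bernoulli likelihood).
        Hyperparameters alpha, a, b are illustrative assumptions."""
        rng = rng or np.random.default_rng(1)
        counts, ones, assignments = [], [], []
        for i, x in enumerate(stimuli):
            probs = []
            for k in range(len(counts)):
                prior = counts[k] / (i + alpha)               # CRP weight
                if x:
                    like = (ones[k] + a) / (counts[k] + a + b)
                else:
                    like = (counts[k] - ones[k] + b) / (counts[k] + a + b)
                probs.append(prior * like)
            new_like = a / (a + b) if x else b / (a + b)      # fresh cluster
            probs.append(alpha / (i + alpha) * new_like)
            probs = np.array(probs)
            probs /= probs.sum()
            k = int(rng.choice(len(probs), p=probs))
            if k == len(counts):
                counts.append(0)
                ones.append(0)
            counts[k] += 1
            ones[k] += x
            assignments.append(k)
        return assignments

    z = particle_filter_rmc([1, 1, 1, 0, 0, 0, 1, 1])
    print(z)   # cluster assignment for each stimulus, in presentation order
    ```

    Keeping only one sampled assignment per stimulus is exactly what makes the single-particle filter order-sensitive, which is the property the paper exploits to describe human inferences.
    
    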

  12. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    PubMed

    Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart

    2016-01-01

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for the design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip; the latter is computed as either locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide the location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function, which was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly spaced and highly informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly spaced SNPs. Imputation accuracy increased with LD chip size, and the imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation errors occur at random, the imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of the imputation error rate was propagated to genomic prediction in an Angus population. 
The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNPs selected based on SNP-trait associations in U.S. Holstein animals. With this MOLO algorithm, both the imputation error rate and the genomic prediction error rate were minimal.
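The flavor of the MOLO objective, informativeness traded off against spacing uniformity, can be conveyed with a toy greedy sketch. The scoring rule (entropy times distance to the nearest selected SNP) and the data are invented stand-ins, not the published algorithm:

```python
import numpy as np

def select_snps(positions, maf, n_select):
    """Greedy toy version of multi-objective SNP selection: at each step pick
    the SNP with the best product of locus Shannon entropy (informativeness)
    and distance to the nearest already-selected SNP (uniformity)."""
    p = np.asarray(maf, float)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # per-locus entropy
    chosen = [int(np.argmax(entropy))]                       # seed: most informative
    for _ in range(n_select - 1):
        best, best_score = None, -1.0
        for i in range(len(positions)):
            if i in chosen:
                continue
            gap = min(abs(positions[i] - positions[j]) for j in chosen)
            score = entropy[i] * gap
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return sorted(chosen)

pos = [0, 10, 20, 30, 40, 50]            # toy map positions
maf = [0.05, 0.45, 0.40, 0.10, 0.50, 0.30]  # toy minor allele frequencies
picked = select_snps(pos, maf, 3)
print(picked)
```

The real algorithm additionally handles obligatory SNPs, haplotype-based entropy, and a Beta-distributed frame; this sketch only shows the entropy-versus-spacing trade-off.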

  14. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
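    A minimal version of the firefly algorithm mentioned above, in Yang's standard scheme, is shown below applied to a convex test function rather than the paper's data-parameterization problem; the parameter values are typical defaults, not those tuned for the CAD/CAM application:

    ```python
    import numpy as np

    def firefly_minimize(f, dim, n_fireflies=20, n_iter=200,
                         beta0=1.0, gamma=1.0, alpha=0.2, seed=42):
        """Minimal firefly algorithm for continuous minimization: dimmer
        fireflies move toward brighter (lower-cost) ones with attractiveness
        decaying as exp(-gamma * r^2), plus a damped random walk."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-2, 2, size=(n_fireflies, dim))
        fit = np.array([f(xi) for xi in x])
        for t in range(n_iter):
            alpha_t = alpha * 0.97 ** t        # damp the random walk over time
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if fit[j] < fit[i]:        # j is brighter than i
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha_t * rng.normal(size=dim)
                        fit[i] = f(x[i])
        best = int(np.argmin(fit))
        return x[best], fit[best]

    # Sphere function: minimum 0 at the origin.
    xb, fb = firefly_minimize(lambda v: np.sum(v ** 2), dim=3)
    print(fb)   # typically converges near zero for this convex test
    ```

    In the paper's pipeline, `f` would instead score a data parameterization; the resulting fit is then refined into a convex subproblem solved by singular value decomposition.
    
    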

  16. Comments on localized and integral localized approximations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Gouesbet, Gérard; Lock, James A.

    2016-08-01

    Localized approximation procedures are efficient ways to evaluate the beam shape coefficients of laser beams, and are particularly useful when other methods are ineffective or inefficient. Comments on these procedures are, however, required in order to help researchers make correct decisions concerning their use. This paper has the flavor of a short review and draws the readers' attention to a needed refinement of terminology.

  17. Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions

    NASA Technical Reports Server (NTRS)

    Gilland, James H.

    1991-01-01

    The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfers and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
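    The kind of approximate specific-impulse optimization described can be sketched with Stuhlinger's constant-thrust, power-limited model: payload fraction equals the rocket-equation term minus a power-plant mass proportional to jet power. The specific mass, efficiency, trip time, and delta-v below are rough illustrative numbers, not the paper's values:

    ```python
    import numpy as np

    def payload_fraction(ve, dv, alpha=20e-3, eta=0.6, t=300 * 86400.0):
        """Payload fraction for a power-limited electric propulsion transfer:
        rocket-equation propellant fraction plus a power-plant mass alpha*P,
        with alpha the specific mass in kg/W and eta the thruster efficiency.
        All numbers are rough, illustrative assumptions."""
        prop = 1.0 - np.exp(-dv / ve)                 # propellant mass fraction
        powerplant = (alpha * ve**2 / (2.0 * eta * t)) * prop
        return np.exp(-dv / ve) - powerplant

    dv = 20e3                                  # ~20 km/s, Mars-class delta-v
    ve_grid = np.linspace(5e3, 200e3, 2000)    # candidate exhaust velocities, m/s
    f = payload_fraction(ve_grid, dv)
    ve_opt = ve_grid[np.argmax(f)]
    isp_opt = ve_opt / 9.81
    print(isp_opt)                             # optimal specific impulse, seconds
    ```

    Higher exhaust velocity saves propellant but demands a heavier power plant; the optimum balances the two, which is the core trade captured by these scoping approximations.
    
    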

  18. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y.; Borland, Michael

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of the transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives is compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  19. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
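    A minimal one-dimensional-per-coordinate sketch of such a Robbins-Monro iteration follows, converging to the fixed point of a contractive regression function observed only through noise; the example function, noise level, and step-size schedule are invented for illustration:

    ```python
    import numpy as np

    def robbins_monro(noisy_g, x0, n_iter=5000):
        """Robbins-Monro iteration for the fixed point of a contractive
        regression function g observed in noise:
            x_{n+1} = x_n + a_n * (Y_n - x_n),   a_n = 1/(n+1),
        where E[Y_n | x_n] = g(x_n). The steps satisfy the classical
        conditions sum(a_n) = inf, sum(a_n^2) < inf."""
        x = np.asarray(x0, float)
        for n in range(n_iter):
            a = 1.0 / (n + 1)
            x = x + a * (noisy_g(x) - x)
        return x

    # Example: g(x) = 0.5*x + [1, 2] is a contraction with fixed point [2, 4].
    rng = np.random.default_rng(0)
    g = lambda x: 0.5 * x + np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=2)
    x_star = robbins_monro(g, x0=[0.0, 0.0])
    print(x_star)   # close to [2, 4]
    ```

    In the record's application, `noisy_g` would be the EM-style update for the mixture parameters, whose fixed point is the maximum likelihood estimate.
    
    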

  20. A Simplified GCS-DCSK Modulation and Its Performance Optimization

    NASA Astrophysics Data System (ADS)

    Xu, Weikai; Wang, Lin; Chi, Chong-Yung

    2016-12-01

    In this paper, a simplified Generalized Code-Shifted Differential Chaos Shift Keying (GCS-DCSK) scheme, whose transmitter requires no delay circuits, is proposed. However, its performance deteriorates because the orthogonality between substreams cannot be guaranteed. In order to optimize its performance, a system model of the proposed GCS-DCSK with power allocation across substreams is presented. An approximate bit error rate (BER) expression for the proposed model, as a function of the substreams' power, is derived using a Gaussian approximation. Based on the BER expression, an optimal power allocation strategy between the information substreams and the reference substream is obtained. Simulation results show that the BER performance of the proposed GCS-DCSK with optimal power allocation can be significantly improved when the number of substreams M is large.

  1. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed, and calculations for a high-performance aircraft are presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored, and some quirks in optimal flight characteristics peculiar to the model are uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of the zeroth-order approximation by a special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  2. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  3. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    PubMed

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication burden between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite-horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  5. Human small breast epithelial mucin: the promise of a new breast tumor biomarker.

    PubMed

    Hubé, F; Mutawe, M; Leygue, E; Myal, Y

    2004-12-01

    Breast cancer remains one of the most frequently diagnosed cancers today. In developed countries, one in eight women is expected to present with breast cancer within her lifetime and an estimated 1,000,000 cases are detected each year worldwide (Canadian Cancer Statistics, http://www.cancer.ca/vgn/images/portal/cit_86751114/14/33/195986411niw_stats2004_en.pdf). For women with recurrent disease, the median time of survival is about 2 years. Despite optimal surgery, adjuvant irradiation, hormonal treatment, and chemotherapy, approximately 30% of patients with localized breast cancer finally develop distant metastases. Early detection, which enables intervention at a localized and potentially curable stage, remains a central goal in breast cancer treatment. Indeed, the 5-year survival rate for women with breast cancer has been shown to increase dramatically when the disease is diagnosed at an early stage: from less than 25% in women with disseminated cancer to about 75% in patients with regional disease and over 95% in women with a localized tumor (Breast Cancer Facts and Figures, 2001-2002, http://www.cancer.org/downloads/STT/BrCaFF2001.pdf). Unfortunately, only 60% of all breast cancers are diagnosed at a local stage. Any improvement in early detection through identification of tumor biomarkers would have a significant impact on reducing overall breast cancer mortality.

  6. On the mechanism of bandgap formation in locally resonant finite elastic metamaterials

    NASA Astrophysics Data System (ADS)

    Sugino, Christopher; Leadenham, Stephen; Ruzzene, Massimo; Erturk, Alper

    2016-10-01

    Elastic/acoustic metamaterials made from locally resonant arrays can exhibit bandgaps at wavelengths much longer than the lattice size for various applications spanning from low-frequency vibration/sound attenuation to wave guiding and filtering in mechanical and electromechanical devices. For an effective use of such locally resonant metamaterial concepts in finite structures, it is required to bridge the gap between the lattice dispersion characteristics and the modal behavior of the host structure with its resonators. To this end, we develop a novel argument for bandgap formation in finite-length elastic metamaterial beams, relying on modal analysis and the assumption of infinitely many resonators. We show that the dual problem to wave propagation through an infinite periodic beam is the modal analysis of a finite beam with an infinite number of resonators. A simple formula that depends only on the resonator natural frequency and total mass ratio is derived for placing the bandgap in a desired frequency range, yielding analytical insight and a rule of thumb for design purposes. A method for understanding the importance of resonator location and mass is discussed in the context of a Riemann sum approximation of an integral, and a method for determining the optimal number of resonators for a given set of boundary conditions and target frequency is introduced. Simulations of the theoretical framework are validated by experiments on the bending vibrations of a locally resonant cantilever beam.
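The design formula mentioned here depends only on the resonator natural frequency and the mass ratio; in the locally resonant metamaterial literature the resulting bandgap (for many resonators) is commonly estimated to span from the resonator frequency f_t up to f_t*sqrt(1 + mu). A sketch under that assumption, with illustrative values:

```python
import math

def bandgap_edges(f_t, mu):
    """Rule-of-thumb locally resonant bandgap: lower edge at the
    resonator natural frequency f_t, upper edge scaled by the square
    root of (1 + mass ratio mu). Assumes many well-distributed
    resonators; an illustrative estimate, not the paper's derivation."""
    return f_t, f_t * math.sqrt(1.0 + mu)

low, high = bandgap_edges(100.0, 0.5)   # 100 Hz resonators, 50% added mass
```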

  7. Comparative study of various normal mode analysis techniques based on partial Hessians.

    PubMed

    Ghysels, An; Van Speybroeck, Veronique; Pauwels, Ewald; Catak, Saron; Brooks, Bernard R; Van Neck, Dimitri; Waroquier, Michel

    2010-04-15

    Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and application field but guidelines for the most suitable choice are lacking. We have investigated several partial Hessian methods, including the Partial Hessian Vibrational Analysis (PHVA), the Mobile Block Hessian (MBH), and the Vibrational Subsystem Analysis (VSA). In this article, we focus on the benefits and drawbacks of these methods, in terms of the reproduction of localized modes, collective modes, and the performance in partially optimized structures. We find that the PHVA is suitable for describing localized modes, that the MBH not only reproduces localized and global modes but also serves as an analysis tool of the spectrum, and that the VSA is mostly useful for the reproduction of the low frequency spectrum. These guidelines are illustrated with the reproduction of the localized amine-stretch, the spectrum of quinine and a bis-cinchona derivative, and the low frequency modes of the LAO binding protein. 2009 Wiley Periodicals, Inc.
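In its simplest form, a partial Hessian method such as PHVA diagonalizes only the Hessian sub-block for the coordinates of interest, holding the rest of the system fixed. A toy sketch on a three-mass spring chain (not a molecular implementation; masses and spring constants are illustrative):

```python
import numpy as np

def phva_frequencies(hessian, free):
    """Partial Hessian Vibrational Analysis, minimal form: diagonalize
    the sub-block of the (mass-weighted) Hessian for the selected
    'free' coordinates, with all other coordinates held fixed."""
    sub = hessian[np.ix_(free, free)]
    return np.sqrt(np.abs(np.linalg.eigvalsh(sub)))

# three unit masses on a line joined by unit springs (free ends)
H = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
full = np.sqrt(np.abs(np.linalg.eigvalsh(H)))   # all modes of the chain
partial = phva_frequencies(H, [1])              # middle mass only, ends frozen
```

Freezing the environment shifts the partial frequency (sqrt(2) here) away from the full-system modes (0, 1, sqrt(3)), which is exactly the kind of localized-versus-collective trade-off the comparison in this record quantifies.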

  8. Comparative Study of Various Normal Mode Analysis Techniques Based on Partial Hessians

    PubMed Central

    GHYSELS, AN; VAN SPEYBROECK, VERONIQUE; PAUWELS, EWALD; CATAK, SARON; BROOKS, BERNARD R.; VAN NECK, DIMITRI; WAROQUIER, MICHEL

    2014-01-01

    Standard normal mode analysis becomes problematic for complex molecular systems, as a result of both the high computational cost and the excessive amount of information when the full Hessian matrix is used. Several partial Hessian methods have been proposed in the literature, yielding approximate normal modes. These methods aim at reducing the computational load and/or calculating only the relevant normal modes of interest in a specific application. Each method has its own (dis)advantages and application field but guidelines for the most suitable choice are lacking. We have investigated several partial Hessian methods, including the Partial Hessian Vibrational Analysis (PHVA), the Mobile Block Hessian (MBH), and the Vibrational Subsystem Analysis (VSA). In this article, we focus on the benefits and drawbacks of these methods, in terms of the reproduction of localized modes, collective modes, and the performance in partially optimized structures. We find that the PHVA is suitable for describing localized modes, that the MBH not only reproduces localized and global modes but also serves as an analysis tool of the spectrum, and that the VSA is mostly useful for the reproduction of the low frequency spectrum. These guidelines are illustrated with the reproduction of the localized amine-stretch, the spectrum of quinine and a bis-cinchona derivative, and the low frequency modes of the LAO binding protein. PMID:19813181

  9. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  10. An image warping technique for rodent brain MRI-histology registration based on thin-plate splines with landmark optimization

    NASA Astrophysics Data System (ADS)

    Liu, Yutong; Uberti, Mariano; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael D.

    2009-02-01

    Coregistration of in vivo magnetic resonance imaging (MRI) with histology provides validation of disease biomarker and pathobiology studies. Although thin-plate splines are widely used in such image registration, point landmark selection is error prone and often time-consuming. We present a technique to optimize landmark selection for thin-plate splines and demonstrate its usefulness in warping rodent brain MRI to histological sections. In this technique, contours are drawn on the corresponding MRI slices and images of histological sections. The landmarks are extracted from the contours by equal spacing and then optimized by minimizing a cost function consisting of the landmark displacement and contour curvature. The technique was validated using simulation data and brain MRI-histology coregistration in a murine model of HIV-1 encephalitis. Registration error was quantified by calculating the target registration error (TRE). The TRE of approximately 8 pixels for 20-80 landmarks without optimization was stable across different landmark numbers. The optimized results were more accurate at low landmark numbers (TRE of approximately 2 pixels for 50 landmarks), while the accuracy decreased (TRE of approximately 8 pixels) for larger numbers of landmarks (70-80). The results demonstrated that registration accuracy decreases with increasing landmark numbers, offering more confidence in MRI-histology registration using thin-plate splines.
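The first step described here, extracting equally spaced landmarks from a drawn contour, can be sketched as an arc-length resampling of the contour polyline (a simplified illustration; the curvature-based cost optimization is omitted):

```python
import numpy as np

def equally_spaced_landmarks(contour, n):
    """Resample a contour (k x 2 array of points) into n landmarks
    spaced equally by arc length -- the starting point for the
    landmark optimization described in the record."""
    seg = np.diff(contour, axis=0)
    # cumulative arc length along the polyline
    d = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    t = np.linspace(0.0, d[-1], n)
    return np.stack([np.interp(t, d, contour[:, 0]),
                     np.interp(t, d, contour[:, 1])], axis=1)

# straight-line contour: landmarks should come out evenly spaced
pts = equally_spaced_landmarks(np.array([[0.0, 0.0], [10.0, 0.0]]), 5)
```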

  11. Existence and discrete approximation for optimization problems governed by fractional differential equations

    NASA Astrophysics Data System (ADS)

    Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng

    2018-06-01

    We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem, and the Filippov implicit function lemma. A numerical approximation algorithm is then introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by a fractional differential equation is illustrated, and the results verify the validity of the algorithm.
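For intuition, the Caputo derivative driving these problems can be approximated numerically. The Grünwald-Letnikov sum below coincides with the Caputo derivative for functions vanishing at zero and is first-order accurate; it is a generic sketch, not the paper's approximation algorithm:

```python
import math

def caputo_gl(f, alpha, t, n=1000):
    """Grunwald-Letnikov approximation of the order-alpha fractional
    derivative of f at t, with binomial weights computed recursively.
    Coincides with the Caputo derivative when f(0) = 0."""
    h = t / n
    w, acc = 1.0, f(t)                       # j = 0 term, weight 1
    for j in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / j         # w_j = w_{j-1}(1 - (a+1)/j)
        acc += w * f(t - j * h)
    return acc / h ** alpha

approx = caputo_gl(lambda s: s * s, 0.5, 1.0)
exact = math.gamma(3.0) / math.gamma(2.5)    # Caputo D^{1/2} of t^2 at t = 1
```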

  12. Combined adaptive multiple subtraction based on optimized event tracing and extended Wiener filtering

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo

    2017-06-01

    The surface-related multiple elimination (SRME) method is based on a feedback formulation and has become one of the most widely used multiple-suppression methods. However, differences remain between the predicted multiples and those in the source seismic records, which can leave conventional adaptive multiple subtraction methods unable to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and a short-time-window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on optimized event tracing with those of the extended Wiener filtering technique. It is well suited for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
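The core of adaptive subtraction is a least-squares (Wiener) matching filter that shapes the predicted multiples to the recorded data before subtracting them. A minimal single-trace sketch, with synthetic data, not the paper's extended Wiener filtering:

```python
import numpy as np

def matching_filter_subtract(data, multiple, nf=5):
    """Least-squares matching filter: find the nf-point filter f that
    minimizes ||data - M f||, where M convolves the predicted multiple
    with f, then subtract the shaped multiples from the data."""
    n = len(data)
    M = np.zeros((n, nf))
    for j in range(nf):
        M[j:, j] = multiple[:n - j]          # column j: multiple delayed by j
    f, *_ = np.linalg.lstsq(M, data, rcond=None)
    return data - M @ f

rng = np.random.default_rng(0)
mult = rng.standard_normal(200)              # predicted multiple model
primary = np.zeros(200)
primary[50] = 1.0                            # isolated primary event
recorded = primary + np.convolve(mult, [0.5, -0.2])[:200]
residual = matching_filter_subtract(recorded, mult)
```

The filter absorbs the amplitude and phase mismatch between predicted and actual multiples while leaving the primary event largely intact, which is the behavior the combined method above is designed to improve on.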

  13. Band gap characterization of ternary BBi1-xNx (0≤x≤1) alloys using modified Becke-Johnson (mBJ) potential

    NASA Astrophysics Data System (ADS)

    Yalcin, Battal G.

    2015-04-01

    The semi-local Becke-Johnson (BJ) exchange-correlation potential and its modified form proposed by Tran and Blaha have attracted a lot of interest recently because of the surprisingly accurate band gaps they can deliver for many semiconductors and insulators (e.g., sp semiconductors, noble-gas solids, and transition-metal oxides). The structural and electronic properties of ternary BBi1-xNx (0≤x≤1) alloys in the zinc-blende phase are reported in this study. Results for the binary compounds (BN and BBi) and the ternary BBi1-xNx alloys are presented by means of density functional theory. Exchange and correlation effects are taken into account using the generalized gradient approximation (GGA) functional of Wu and Cohen (WC), an improved form of the popular Perdew-Burke-Ernzerhof (PBE) functional. For the electronic properties, the modified Becke-Johnson (mBJ) potential, which is more accurate than standard semi-local LDA and PBE calculations, has been chosen. Geometry optimization was carried out before the volume optimization calculations for all the studied alloy structures. The obtained equilibrium lattice constants of the binary compounds agree with experimental work, and the variation of the lattice parameter of the ternary BBi1-xNx alloys almost perfectly follows Vegard's law. The spin-orbit interaction (SOI) has also been considered for the structural and electronic calculations, and the results are compared to those of non-SOI calculations.

  14. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
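For readers unfamiliar with the PSO component, a minimal global-best particle swarm optimizer looks as follows. This is a generic textbook variant with conventional inertia and attraction coefficients, not the consensus-based PSO or the Trust-Tech stages of the paper:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best PSO: each particle's velocity blends inertia
    with pulls toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.72 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best, best_val = pso(lambda z: float(np.sum(z * z)), dim=5)
```

The three-stage methodology of the record wraps a population search like this between Trust-Tech exploration and local refinement, so that multiple high-quality local optima are collected rather than a single swarm best.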

  15. Approximate Analysis for Interlaminar Stresses in Composite Structures with Thickness Discontinuities

    NASA Technical Reports Server (NTRS)

    Rose, Cheryl A.; Starnes, James H., Jr.

    1996-01-01

    An efficient, approximate analysis for calculating complete three-dimensional stress fields near regions of geometric discontinuities in laminated composite structures is presented. An approximate three-dimensional local analysis is used to determine the detailed local response due to far-field stresses obtained from a global two-dimensional analysis. The stress results from the global analysis are used as traction boundary conditions for the local analysis. A generalized plane deformation assumption is made in the local analysis to reduce the solution domain to two dimensions. This assumption allows out-of-plane deformation to occur. The local analysis is based on the principle of minimum complementary energy and uses statically admissible stress functions that have an assumed through-the-thickness distribution. Examples are presented to illustrate the accuracy and computational efficiency of the local analysis. Comparisons of the results of the present local analysis with the corresponding results obtained from a finite element analysis and from an elasticity solution are presented. These results indicate that the present local analysis predicts the stress field accurately. Computer execution-times are also presented. The demonstrated accuracy and computational efficiency of the analysis make it well suited for parametric and design studies.

  16. Conjugate-gradient optimization method for orbital-free density functional calculations.

    PubMed

    Jiang, Hong; Yang, Weitao

    2004-08-01

    Orbital-free density functional theory, as an extension of traditional Thomas-Fermi theory, has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we develop a conjugate-gradient method for the numerical solution of the spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredients of the method are an approximate line-search scheme and a collective treatment of the two spin densities in the spin-dependent case. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
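The conjugate-gradient machinery underlying such solvers is easiest to see in its linear form, where the line search for a quadratic objective is available in closed form. This is a textbook sketch, not the orbital-free DFT implementation of the record:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain linear conjugate gradients for A x = b, A symmetric
    positive definite: exact line search along conjugate directions."""
    x = np.zeros_like(b)
    r = b - A @ x
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rs / (d @ Ad)            # exact step for a quadratic
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d        # conjugate direction update
        rs = rs_new
    return x

# minimize f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sol = conjugate_gradient(A, b)
```

In the nonlinear density-functional setting the exact step above is replaced by an approximate line search, which is precisely the ingredient the abstract highlights.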

  17. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  18. Exploring the statistics of magnetic reconnection X-points in kinetic particle-in-cell turbulence

    NASA Astrophysics Data System (ADS)

    Haggerty, C. C.; Parashar, T. N.; Matthaeus, W. H.; Shay, M. A.; Yang, Y.; Wan, M.; Wu, P.; Servidio, S.

    2017-10-01

    Magnetic reconnection is a ubiquitous phenomenon in turbulent plasmas and an important part of the turbulent dynamics and heating of space and astrophysical plasmas. We examine the statistics of magnetic reconnection using a quantitative local analysis of the magnetic vector potential, previously used in magnetohydrodynamics simulations and now applied to fully kinetic particle-in-cell (PIC) simulations. Different ways of reducing the particle noise for analysis purposes, including multiple smoothing techniques, are explored. We find that a Fourier filter applied at the Debye scale is an optimal choice for analyzing PIC data. Finally, we find a broader distribution of normalized reconnection rates compared to the MHD limit, with rates as large as 0.5 but with an average of approximately 0.1.
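A Fourier filter applied at a fixed physical scale, as used here to suppress PIC particle noise, amounts to zeroing all modes with wavelengths shorter than the cutoff. A minimal 2-D periodic sketch with a synthetic field and an illustrative cutoff:

```python
import numpy as np

def fourier_filter(field, cutoff_scale, dx=1.0):
    """Isotropic low-pass Fourier filter: keep only modes whose
    wavelength is at least cutoff_scale (e.g. the Debye length when
    smoothing PIC noise). Assumes a 2-D periodic field."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx)        # spatial frequencies, cycles/length
    ky = np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    mask = kmag <= 1.0 / cutoff_scale
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

n = 64
x = np.arange(n)
signal = np.sin(2 * np.pi * x / n)             # wavelength 64: survives
noise = 0.5 * np.cos(2 * np.pi * 16 * x / n)   # wavelength 4: removed
filtered = fourier_filter(np.tile(signal + noise, (n, 1)), cutoff_scale=8.0)
```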

  19. Novel palmprint representations for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Dong, Jiwen; Li, Jinping; Wang, Lei

    2015-02-01

    In this paper, we propose a novel palmprint recognition algorithm. First, the palmprint images are represented by anisotropic filters. The filters are built from Gaussian functions along one direction and second derivatives of Gaussian functions in the orthogonal direction. This choice is motivated by the optimal joint spatial and frequency localization of the Gaussian kernel; the filters can therefore better approximate the edges and lines of palmprint images. A palmprint image is processed with a bank of anisotropic filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, subspace analysis is applied to the feature vectors for dimension reduction as well as class separability. Experimental results on a public palmprint database show that accuracy is improved by the proposed novel representations compared with Gabor features.
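A filter of the kind described, Gaussian along one direction and second derivative of a Gaussian in the orthogonal direction, can be constructed directly on a pixel grid (kernel size and scale parameters below are illustrative, not taken from the paper):

```python
import numpy as np

def anisotropic_kernel(size=15, sigma_u=4.0, sigma_v=1.5, theta=0.0):
    """Anisotropic line/edge kernel: smooth (Gaussian) along direction
    u at angle theta, and a second derivative of a Gaussian along the
    orthogonal direction v. Zero-mean so it has no DC response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    u = x * np.cos(theta) + y * np.sin(theta)     # smoothing direction
    v = -x * np.sin(theta) + y * np.cos(theta)    # detail direction
    g_u = np.exp(-u ** 2 / (2 * sigma_u ** 2))
    # second derivative of a 1-D Gaussian along v
    g2_v = (v ** 2 / sigma_v ** 4 - 1.0 / sigma_v ** 2) * np.exp(-v ** 2 / (2 * sigma_v ** 2))
    k = g_u * g2_v
    return k - k.mean()

k = anisotropic_kernel()
```

A bank of such kernels at several scales and values of theta, convolved with the image, yields the oriented responses that the subspace analysis step then compresses.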

  20. The investigation of an LSPR refractive index sensor based on periodic gold nanorings array

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Xiaohong; Ding, Mingjie; Peng, Gangding; Qi, Yongle; Wang, Yile; Ren, Jie

    2018-01-01

    An on-chip refractive index (RI) sensor, based on the localized surface plasmon resonance (LSPR) of a periodic gold nanoring array, is presented. The structural parameters and performance of the LSPR-based sensor are optimized by analyzing and comparing the LSPR extinction spectra. The mechanism of the enhancement of plasmon resonance in a ring array is discussed using the simulation results. A feasible preparation scheme for the nanoring array is proposed and verified by coating a gold film and etching on photonic crystals. Based on the optimum sensing structure, an RI sensor is constructed with an RI sensitivity of 577 nm/refractive index unit (RIU) and a figure of merit (FOM) of 6.1, approximately twice that of previous reports.
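The two figures quoted here follow standard LSPR sensor definitions: bulk sensitivity S is the resonance shift per refractive-index unit, and FOM = S / FWHM of the resonance. The sketch below uses an illustrative peak shift and a back-computed linewidth chosen to be consistent with the quoted values; neither number is reported in the record:

```python
def sensitivity_and_fom(peak_shift_nm, delta_n, fwhm_nm):
    """Standard LSPR sensor figures: bulk sensitivity S (nm per RIU)
    and figure of merit FOM = S / FWHM. Input values are hypothetical."""
    s = peak_shift_nm / delta_n
    return s, s / fwhm_nm

s, fom = sensitivity_and_fom(peak_shift_nm=57.7, delta_n=0.1, fwhm_nm=94.6)
```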

  1. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
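The GA driver replacing gradient descent here can be illustrated with a bare-bones real-coded genetic algorithm (truncation selection, blend crossover, Gaussian mutation) on a toy cost; this is a generic sketch, not the medial-based shape model of the paper:

```python
import random

def ga_minimize(f, bounds, pop=40, gens=100, seed=1):
    """Minimal real-coded GA in 2-D: keep the best quarter of the
    population each generation, refill with blended, mutated children."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[: pop // 4]                    # elitist truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)          # blend crossover of two elites
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            # Gaussian mutation, clipped to the search bounds
            child = [min(hi, max(lo, c + rng.gauss(0.0, 0.05))) for c in child]
            children.append(child)
        P = elite + children
    return min(P, key=f)

best = ga_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, (-5.0, 5.0))
```

Evolving many candidates at once is what gives the paper's method its robustness to initialization and local minima, at the cost of losing any global-optimality guarantee.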

  2. Structural Characterizations of Glycerol Kinase: Unraveling Phosphorylation-Induced Long-Range Activation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Joanne I.; Kettering, Regina; Saxl, Ruth

    2009-09-11

    Glycerol metabolism provides a central link between sugar and fatty acid catabolism. In most bacteria, glycerol kinase plays a crucial role in regulating channel/facilitator-dependent uptake of glycerol into the cell. In the firmicute Enterococcus casseliflavus, this enzyme's activity is enhanced by phosphorylation of the histidine residue (His232) located in its activation loop, approximately 25 Å from its catalytic cleft. We reported earlier that some mutations of His232 altered enzyme activities; we present here the crystal structures of these mutant GlpK enzymes. The structure of a mutant enzyme with enhanced enzymatic activity, His232Arg, reveals that residues at the catalytic cleft are more optimally aligned to bind ATP and mediate phosphoryl transfer. Specifically, the position of Arg18 in His232Arg shifts by approximately 1 Å when compared to its position in the wild-type (WT), His232Ala, and His232Glu enzymes. This new conformation of Arg18 is more optimally positioned at the presumed gamma-phosphate location of ATP, close to the glycerol substrate. In addition to structural changes exhibited at the active site, the conformational stability of the activation loop is decreased, as reflected by an approximately 35% increase in B factors ('thermal factors') in a mutant enzyme displaying diminished activity, His232Glu. Correlating conformational changes to alteration of enzymatic activities in the mutant enzymes identifies distinct localized regions that can have profound effects on intramolecular signal transduction. Alterations in pairwise interactions across the dimer interface can communicate phosphorylation states over 25 Å from the activation loop to the catalytic cleft, positioning Arg18 to form favorable interactions at the beta,gamma-bridging position with ATP. This would offset loss of the hydrogen bonds at the gamma-phosphate of ATP during phosphoryl transfer to glycerol, suggesting that appropriate alignment of the second substrate of glycerol kinase, the ATP molecule, may largely determine the rate of glycerol 3-phosphate production.

  3. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches to interpolation have been proposed, both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive, because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is handled by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. The technique had previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
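Ordinary kriging with a tapered covariance can be sketched compactly: the estimator solves the kriging system with a Lagrange multiplier enforcing that the weights sum to one (unbiasedness), and multiplying the covariance model by a compactly supported taper zeroes all long-range entries, which is what makes the system sparse at scale. A small dense illustration, with the exponential model and spherical taper as assumptions (the paper's model choices are not specified in the abstract):

```python
import numpy as np

def ordinary_kriging(xy, z, query, range_=1.0, taper=2.0):
    """Ordinary kriging prediction at one query point using an
    exponential covariance multiplied by a spherical taper (zero
    beyond `taper`). Dense solve here; the taper-induced zeros are
    what a sparse solver would exploit."""
    def cov(h):
        r = np.minimum(h / taper, 1.0)
        return np.exp(-h / range_) * (1.0 - 1.5 * r + 0.5 * r ** 3)
    n = len(z)
    H = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # ordinary kriging system: [C 1; 1^T 0] [w; mu] = [c0; 1]
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(H)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.linalg.norm(xy - query, axis=1))
    w = np.linalg.solve(K, rhs)[:n]
    return float(w @ z)

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, (50, 2))
vals = np.sin(3.0 * pts[:, 0]) + pts[:, 1]
est = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))
est_at_data = ordinary_kriging(pts, vals, pts[0])  # kriging interpolates exactly
```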

  4. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  5. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  6. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
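The construction can be miniaturized: settle an agent at the cheapest input for each sampled output value, then interpolate optimal inputs as a function of the desired output. A toy version with a closed-form per-output optimum and linear interpolation standing in for the paper's spline:

```python
import numpy as np

# toy system: output(x) = x1 + x2, cost(x) = x1^2 + x2^2; by Lagrange
# multipliers the cheapest input achieving output y is x1 = x2 = y/2,
# standing in for one settled agent of the population
def optimal_input(y):
    return np.array([y / 2.0, y / 2.0])

ys = np.linspace(-2.0, 2.0, 9)                     # sampled desired outputs
xs = np.array([optimal_input(y) for y in ys])      # associated optimal inputs

def inverse_function(y_desired):
    """Interpolated inverse map: desired output -> (near-)optimal input."""
    return np.array([np.interp(y_desired, ys, xs[:, k]) for k in range(2)])

u = inverse_function(1.3)
```

The operator then adjusts a single set point (the desired output) while the interpolated inverse supplies all inputs, which is the dimensionality reduction the abstract emphasizes.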

  7. Health-related quality of life, optimism, and coping strategies in persons suffering from localized scleroderma.

    PubMed

    Szramka-Pawlak, B; Dańczak-Pazdrowska, A; Rzepa, T; Szewczyk, A; Sadowska-Przytocka, A; Żaba, R

    2013-01-01

    The clinical course of localized scleroderma may consist of bodily deformations, and bodily functions may also be affected. Additionally, the secondary lesions, such as discoloration, contractures, and atrophy, are unlikely to regress. The aforementioned symptoms and functional disturbances may decrease one's quality of life (QoL). Although much has been mentioned in the medical literature regarding QoL in persons suffering from dermatologic diseases, no data specifically describing patients with localized scleroderma exist. The aim of the study was to explore QoL in localized scleroderma patients and to examine their coping strategies in regard to optimism and QoL. The study included 41 patients with localized scleroderma. QoL was evaluated using the SKINDEX questionnaire, and levels of dispositional optimism were assessed using the Life Orientation Test-Revised. In addition, individual coping strategy was determined using the Mini-MAC scale and physical condition was assessed using the Localized Scleroderma Severity Index. The mean QoL score amounted to 51.10 points, with mean scores for individual components as follows: symptoms = 13.49 points, emotions = 21.29 points, and functioning = 16.32 points. A relationship was detected between QoL and the level of dispositional optimism as well as with coping strategies known as anxious preoccupation and helplessness-hopelessness. Higher levels of optimism predicted a higher general QoL. In turn, greater intensity of anxious preoccupied and helpless-hopeless behaviors predicted a lower QoL. Based on these results, it may be stated that localized scleroderma patients have a relatively high QoL, which is accompanied by optimism as well as a lower frequency of behaviors typical of emotion-focused coping strategies.

  8. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya

    2003-01-01

    The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally match the engine to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error; the regression method reduced this to 1 percent. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models were generated for a neural network method, a regression method, and a hybrid method obtained by combining the two approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited a maximum error of 5 percent over all parameters. The performance of the hybrid method was intermediate between the individual approximators. The error in the response variables is smaller than graphed because of a distortion scale factor. 
The overall performance of the approximators was considered satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
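The surrogate-modeling idea underlying the regression method can be sketched in Python. The quadratic "analysis" function and the variable names below are illustrative stand-ins, not the FLOPS-based analysis or its actual design variables.

```python
import numpy as np

# Toy stand-in for an expensive analysis code: "weight" as a function of
# two design variables (thrust, wing area); all numbers are illustrative.
def analysis(thrust, wing_area):
    return 100.0 + 3.0 * (thrust - 2.0)**2 + 5.0 * (wing_area - 1.0)**2

# Sample the design space, then fit a quadratic regression surrogate
# by least squares.
rng = np.random.default_rng(0)
T = rng.uniform(0.0, 4.0, 200)
A = rng.uniform(0.0, 2.0, 200)
W = analysis(T, A)
X = np.column_stack([np.ones_like(T), T, A, T**2, A**2, T * A])
coef, *_ = np.linalg.lstsq(X, W, rcond=None)

# The cheap surrogate replaces the analysis code inside design loops.
def surrogate(thrust, wing_area):
    x = np.array([1.0, thrust, wing_area, thrust**2, wing_area**2,
                  thrust * wing_area])
    return float(x @ coef)
```

Once trained, the surrogate answers each design query in microseconds, which is what makes it usable as the "reanalyzer" inside an optimization loop.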

  9. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness and the optimization problem is considered as a whole, while error back-propagation gradient descent is used to train the BP neural network. Each particle updates its velocity and position according to its individual best and the global best, learning more from the social (global) optimum and less from its individual optimum, which helps the particles avoid local optima; the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and remains close to it thereafter, and that for the same running time it achieves faster convergence and better search performance, improving both the convergence speed of the algorithm and the efficiency of the later search stage.
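A minimal PSO update augmented with a gradient term, in the spirit of the hybrid described above, can be sketched in Python. The coefficients, the test function, and the stronger weighting of social over individual learning are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hybrid_pso(f, grad, dim=2, n=20, iters=200, w=0.7, c1=1.0, c2=1.5,
               lr=0.01, seed=0):
    """PSO with a gradient-descent refinement added to each position update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # c2 > c1: particles learn more from the swarm's best than from
        # their own best, as the abstract suggests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v - lr * np.array([grad(p) for p in x])  # gradient term
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

sphere = lambda p: float(np.sum(p**2))
sphere_grad = lambda p: 2.0 * p
best_x, best_f = hybrid_pso(sphere, sphere_grad)
```

On this convex test function the gradient term sharpens the local search once the swarm is near the basin of the optimum.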

  10. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multi-point methods.

  11. Multi-level optimization of a beam-like space truss utilizing a continuum model

    NASA Technical Reports Server (NTRS)

    Yates, K.; Gurdal, Z.; Thangjitham, S.

    1992-01-01

    A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.

  12. Tuning transport properties of graphene three-terminal structures by mechanical deformation

    NASA Astrophysics Data System (ADS)

    Torres, V.; Faria, D.; Latgé, A.

    2018-04-01

    Straintronic devices made of carbon-based materials have attracted growing interest due to graphene's high mechanical flexibility and the possibility of interesting changes in transport properties. Properly designed strained systems have been proposed to allow optimized transport responses that can be explored in experimental realizations. In multiterminal systems, comparisons between schemes with different geometries are important to characterize the modifications introduced by mechanical deformations, especially whether the deformations are localized at the central part of the system or extended over a large region. In the present analysis, we study strain effects on the transport properties of triangular and hexagonal graphene flakes, with zigzag and armchair edges, connected to three electronic terminals formed by semi-infinite graphene nanoribbons. Using the Green's function formalism with circular renormalization schemes and a single-band tight-binding approximation, we find that resonant tunneling transport becomes relevant and is more affected by localized deformations in the hexagonal graphene flakes. Moreover, triangular systems with deformations extended into the leads, such as the longitudinal three-folded type, emerge as an interesting scenario for building nanoscale waveguides for electronic currents.

  13. Nature, diffraction-free propagation via space-time correlations, and nonlinear generation of time-diffracting light beams

    NASA Astrophysics Data System (ADS)

    Porras, Miguel A.

    2018-06-01

    We investigate the properties of the recently introduced time-diffracting (TD) beams in free space. They are shown to be paraxial and quasimonochromatic realizations of spatiotemporal localized waves traveling undistorted at arbitrary speeds. The paraxial and quasimonochromatic regime is shown to be necessary to observe what can properly be named diffraction in time. In this regime, the spatiotemporal frequency correlations for diffraction-free propagation are approximated by parabolic correlations. Time-diffracting beams of finite energy traveling at quasiluminal velocities are seen to form substantially longer foci or needles of light than the so-called abruptly focusing and defocusing needle of light or limiting TD beam of infinite speed. Exploring the properties of TD beams under Lorentz transformations and their transformation by paraxial optical systems, we realize that the nonlinear polarization of material media induced by a strongly localized fundamental pump wave generates a TD beam at its second harmonic, whose diffraction-free behavior as a needle of light in free space can be optimized with a standard 4f-imager system.

  14. A study of topologies and protocols for fiber optic local area network

    NASA Technical Reports Server (NTRS)

    Yeh, C.; Gerla, M.; Rodrigues, P.

    1985-01-01

    The emergence of new applications requiring high data traffic necessitates the development of high speed local area networks. Optical fiber is selected as the transmission medium due to its inherent advantages over other possible media and the dual optical bus architecture is shown to be the most suitable topology. Asynchronous access protocols, including token, random, hybrid random/token, and virtual token schemes, are developed and analyzed. Exact expressions for insertion delay and utilization at light and heavy load are derived, and intermediate load behavior is investigated by simulation. A new tokenless adaptive scheme whose control depends only on the detection of activity on the channel is shown to outperform round-robin schemes under uneven loads and multipacket traffic and to perform optimally at light load. An approximate solution to the queueing delay for an oscillating polling scheme under chaining is obtained and results are compared with simulation. Solutions to the problem of building systems with a large number of stations are presented, including maximization of the number of optical couplers, and the use of passive star/bus topologies, bridges and gateways.

  15. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.
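For contrast, the classical approach that the Volterra-factorization method circumvents, a backward Riccati recursion for a finite-horizon discrete-time linear-quadratic problem, can be sketched in Python. The double-integrator matrices and weights below are illustrative choices, not from the paper.

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, N):
    """Time-varying gains K_0..K_{N-1} for x_{k+1} = A x_k + B u_k with
    cost sum(x'Qx + u'Ru) + x_N' Qf x_N, via backward Riccati recursion."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # return gains in forward-time order

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
Qf = 10.0 * np.eye(2)
K = lqr_gains(A, B, Q, R, Qf, N=50)

# Closed-loop simulation with u_k = -K_k x_k.
x = np.array([[1.0], [0.0]])
for Kk in K:
    x = (A - B @ Kk) @ x
```

For infinite-dimensional (delay) systems this recursion becomes an operator Riccati equation, which is exactly the computation the factorization approach avoids.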

  16. Neural dynamic programming and its application to control systems

    NASA Astrophysics Data System (ADS)

    Seong, Chang-Yun

    There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. 
As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
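The dynamic programming solution that NDP approximates can be illustrated in Python on a toy discrete problem: value iteration on a five-state chain standing in for the continuous HJB setting. The rewards, discount factor, and goal placement are chosen purely for illustration.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, goal at state 4,
# actions move left (-1) or right (+1).
n_states, goal, gamma = 5, 4, 0.9

def step(s, a):
    """Deterministic transition: clipped move, reward 1 on reaching goal."""
    s2 = min(max(s + a, 0), n_states - 1)
    reward = 1.0 if s2 == goal else 0.0
    return s2, reward

# Value iteration: V(s) <- max_a [ r(s,a) + gamma * V(s') ].
V = np.zeros(n_states)
for _ in range(100):
    V = np.array([max(r + gamma * V[s2]
                      for s2, r in (step(s, a) for a in (-1, 1)))
                  for s in range(n_states)])

# Greedy (optimal feedback) policy extracted from the converged values.
policy = [max((-1, 1),
              key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
          for s in range(n_states)]
```

The table `V` here is exactly the object whose storage blows up for high-order continuous systems; NDP replaces it with a trained network while keeping the feedback-policy extraction step.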

  17. Partial-Wave Representations of Laser Beams for Use in Light-Scattering Calculations

    NASA Technical Reports Server (NTRS)

    Gouesbet, Gerard; Lock, James A.; Grehan, Gerard

    1995-01-01

    In the framework of generalized Lorenz-Mie theory, laser beams are described by sets of beam-shape coefficients. The modified localized approximation to evaluate these coefficients for a focused Gaussian beam is presented. A new description of Gaussian beams, called standard beams, is introduced. A comparison is made between the values of the beam-shape coefficients in the framework of the localized approximation and the beam-shape coefficients of standard beams. This comparison leads to new insights concerning the electromagnetic description of laser beams. The relevance of our discussion is enhanced by a demonstration that the localized approximation provides a very satisfactory description of top-hat beams as well.

  18. Weak limit of the three-state quantum walk on the line

    NASA Astrophysics Data System (ADS)

    Falkner, Stefan; Boettcher, Stefan

    2014-07-01

    We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility for asymmetric localization to the extreme that it vanishes completely on one side of the initial site. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.

  19. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.

  20. X-ray photoelectron spectrum and electronic properties of a noncentrosymmetric chalcopyrite compound HgGa(2)S(4): LDA, GGA, and EV-GGA.

    PubMed

    Reshak, Ali Hussain; Khenata, R; Kityk, I V; Plucinski, K J; Auluck, S

    2009-04-30

    An all-electron full-potential linearized augmented plane-wave method has been applied for a theoretical study of the band structure, density of states, and electron charge density of the noncentrosymmetric chalcopyrite compound HgGa(2)S(4), using three different approximations for the exchange-correlation potential. Our calculations show that the valence band maximum (VBM) and conduction band minimum (CBM) are located at Gamma, resulting in a direct energy gap of about 2.0, 2.2, and 2.8 eV for the local density approximation (LDA), generalized gradient approximation (GGA), and Engel-Vosko GGA (EV-GGA), respectively, compared to the experimental value of 2.84 eV. We notice that EV-GGA shows excellent agreement with the experimental data. This agreement is attributed to the fact that the Engel-Vosko GGA formalism optimizes the corresponding potential for band structure calculations. We make a detailed comparison of the density of states deduced from the X-ray photoelectron spectra with our calculations. We find that there are strong covalent bonds between the Hg and S atoms and between the Ga and S atoms. The Hg-Hg, Ga-Ga, and S-S bonds are found to be weaker than the Hg-S and Ga-S bonds, confirming the covalent bonding between Hg and S and between Ga and S.
