Sample records for classical optimization methods

  1. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it so that the motion dynamics are respected. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than that of a classical optimization approach, with a relative mean error of 4% on the cost function evaluation.
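
    A minimal sketch of the database-plus-correction idea on a hypothetical two-muscle, one-joint model (the moment arms, grids and quadratic cost below are illustrative assumptions, not the authors' model):

      import numpy as np

      # Hypothetical 1-DOF model: two muscle forces f >= 0 with moment arms r
      # must satisfy the torque equation r @ f = tau (the motion dynamics).
      r = np.array([0.04, -0.03])  # moment arms (m), opposing muscles

      def solve_classical(tau):
          # Offline "classical optimization": brute-force min sum(f^2) s.t. r @ f = tau.
          best, best_cost = None, np.inf
          for f0 in np.linspace(0.0, 2000.0, 201):
              f1 = (tau - r[0] * f0) / r[1]
              if f1 >= 0 and f0**2 + f1**2 < best_cost:
                  best, best_cost = [f0, f1], f0**2 + f1**2
          return best

      taus = np.linspace(-50.0, 50.0, 21)          # database grid of joint torques
      db = np.array([solve_classical(t) for t in taus])

      def music_estimate(tau):
          # Online: interpolate the database, then correct so the dynamics hold exactly.
          f = np.array([np.interp(tau, taus, db[:, i]) for i in range(2)])
          residual = tau - r @ f                   # dynamics violation after interpolation
          return np.clip(f + r * residual / (r @ r), 0.0, None)

      print(music_estimate(12.3))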

  2. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.

  3. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  4. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  5. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x,t) = e^{iS(x,t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the function S(x,t). For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x,t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.

  6. First-order design of geodetic networks using the simulated annealing method

    NASA Astrophysics Data System (ADS)

    Berné, J. L.; Baselga, S.

    2004-09-01

    The general problem of the optimal design of a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classical theory of this problem and the associated optimization methods are reviewed. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, belonging to the iterative heuristic techniques of operational research, uses a thermodynamical analogy to crystalline networks to offer a solution that converges probabilistically to the global optimum. The basic formulation and some examples are studied.
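
    A bare-bones sketch of the simulated annealing loop described above; the toy multimodal cost merely stands in for a real first-order design criterion, which would instead score the covariance matrix of the adjusted network:

      import math, random

      def simulated_annealing(cost, x0, neighbor, t0=1.0, alpha=0.995, n_iter=20000):
          # Accept uphill moves with Boltzmann probability and cool geometrically;
          # the schedule makes convergence to the global optimum probabilistic.
          x, fx, t = x0, cost(x0), t0
          best, fbest = x, fx
          for _ in range(n_iter):
              y = neighbor(x)
              fy = cost(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= alpha
          return best, fbest

      # Toy stand-in objective: place 5 "stations" on [0, 1] under a multimodal cost.
      cost = lambda x: sum(math.sin(25 * xi) ** 2 + (xi - 0.5) ** 2 for xi in x)
      neighbor = lambda x: [min(1.0, max(0.0, xi + random.gauss(0, 0.05))) for xi in x]
      print(simulated_annealing(cost, [random.random() for _ in range(5)], neighbor))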

  7. Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method

    NASA Astrophysics Data System (ADS)

    Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin

    2017-12-01

    Edge detection serves to identify the boundaries of an object against an overlapping background. One of the classic methods for edge detection is the Robinson operator, which produces thin, faint and grey edge lines. To overcome these deficiencies, an improved edge detection method is proposed that takes a graph-based approach using the Ant Colony Optimization algorithm. The improvements consist of thickening the edges and reconnecting edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and determine the extent to which Ant Colony Optimization can improve non-optimized edge detection results and the accuracy of Robinson edge detection. The parameters used to measure edge detection performance are the morphology of the resulting edge lines, MSE and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with sharper and thicker edges. Ant Colony Optimization can thus serve as a method for optimizing the Robinson operator, improving the image results of Robinson detection by 16.77% on average compared with the classic Robinson operator.

  8. Quantum approach to classical statistical mechanics.

    PubMed

    Somma, R D; Batista, C D; Ortiz, G

    2007-07-20

    We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(-c/N) for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.

  9. Optimal Item Selection with Credentialing Examinations.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…

  10. A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model

    ERIC Educational Resources Information Center

    Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.

    2007-01-01

    In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…

  11. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  12. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of species migration to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
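
    A compact sketch of the tabu-search component on a random QAP instance; the tenure, iteration budget, and instance are illustrative, and the surrounding BBO migration loop of the hybrid is omitted:

      import random

      def qap_cost(perm, F, D):
          # Classic QAP objective: sum of flow[i][j] * distance[perm[i]][perm[j]].
          n = len(perm)
          return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

      def tabu_search(perm, F, D, n_iter=300, tenure=7):
          n = len(perm)
          cur, best = perm[:], perm[:]
          best_cost = qap_cost(perm, F, D)
          tabu = {}                                  # (i, j) swap -> expiry iteration
          for it in range(n_iter):
              move, move_cost = None, float("inf")
              for i in range(n):
                  for j in range(i + 1, n):
                      cand = cur[:]
                      cand[i], cand[j] = cand[j], cand[i]
                      c = qap_cost(cand, F, D)
                      # Aspiration: allow a tabu move if it beats the best so far.
                      if (tabu.get((i, j), -1) < it or c < best_cost) and c < move_cost:
                          move, move_cost = (i, j, cand), c
              i, j, cur = move
              tabu[(i, j)] = it + tenure
              if move_cost < best_cost:
                  best, best_cost = cur[:], move_cost
          return best, best_cost

      n = 8
      F = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
      D = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
      print(tabu_search(list(range(n)), F, D))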

  13. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem.

    PubMed

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of species migration to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.

  14. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  15. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  16. Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.

    2000-01-01

    Classical design methods involved in magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.

  17. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

    Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms have been placed as leaders among the candidates for the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states as well as reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.

  18. Layout optimization with algebraic multigrid methods

    NASA Technical Reports Server (NTRS)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is, the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm, where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
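
    For reference, a minimal conjugate gradient solver of the kind used here as the classical baseline, applied to a toy sparse SPD system standing in for the quadratic placement functional (the AMG machinery itself is far more involved and not sketched):

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
          # Classical CG for A x = b with A symmetric positive definite;
          # each iteration costs one matrix-vector product.
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # 1-D Laplacian as a toy stand-in for the placement system matrix.
      n = 100
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = conjugate_gradient(A, b)
      print("residual norm:", np.linalg.norm(A @ x - b))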

  19. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
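
    A sketch of the classical double-loop Monte Carlo estimator that the paper improves upon, on a hypothetical one-dimensional model with the design variable suppressed (the cubic forward model, prior, and noise level are invented for illustration); the log-sum-exp trick tempers, but does not remove, the underflow problem mentioned above:

      import numpy as np

      rng = np.random.default_rng(0)

      def log_likelihood(y, theta, sigma=0.1):
          # Gaussian likelihood for the assumed forward model g(theta) = theta**3.
          return -0.5 * ((y - theta**3) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

      def eig_double_loop(n_outer=500, n_inner=500, sigma=0.1):
          # EIG = E over (theta, y) of [ log p(y|theta) - log E over theta' of p(y|theta') ];
          # the inner expectation is the expensive, underflow-prone part.
          total = 0.0
          for _ in range(n_outer):
              theta = rng.normal()                      # draw from the prior
              y = theta**3 + sigma * rng.normal()       # synthetic observation
              thetas = rng.normal(size=n_inner)         # prior draws for the evidence
              ll = log_likelihood(y, thetas, sigma)
              m = ll.max()
              log_evidence = m + np.log(np.mean(np.exp(ll - m)))   # stabilized log-mean-exp
              total += log_likelihood(y, theta, sigma) - log_evidence
          return total / n_outer

      print("estimated expected information gain:", round(eig_double_loop(), 3))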

  20. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely-used tool in machinery fault diagnosis, because it enhances the impulse component of the signal. The filter coefficients that greatly influence the performance of the minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, of faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
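
    A rough sketch of the idea on a synthetic impulsive signal: choose FIR filter coefficients that maximize the kurtosis (impulsiveness) of the filtered output using a bare-bones PSO; unit-norm filters crudely stand in for the paper's generalized spherical coordinate transformation, and every parameter here is illustrative:

      import numpy as np

      rng = np.random.default_rng(1)

      def kurtosis(x):
          x = x - x.mean()
          return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-12)

      def pso_deconvolution(signal, flen=16, n_particles=20, n_iter=100):
          pos = rng.normal(size=(n_particles, flen))   # each particle = filter coefficients
          vel = np.zeros_like(pos)
          pbest, pbest_val = pos.copy(), np.full(n_particles, -np.inf)
          gbest, gbest_val = pos[0].copy(), -np.inf
          for _ in range(n_iter):
              for i in range(n_particles):
                  f = pos[i] / (np.linalg.norm(pos[i]) + 1e-12)
                  val = kurtosis(np.convolve(signal, f, mode="valid"))
                  if val > pbest_val[i]:
                      pbest_val[i], pbest[i] = val, pos[i].copy()
                  if val > gbest_val:
                      gbest_val, gbest = val, pos[i].copy()
              r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = pos + vel
          return gbest / np.linalg.norm(gbest), gbest_val

      # Synthetic bearing-like signal: periodic impulses buried in noise.
      signal = rng.normal(0.0, 1.0, 4096)
      signal[::256] += 8.0
      filt, k = pso_deconvolution(signal)
      print("kurtosis of deconvolved signal:", round(float(k), 2))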

  1. Computational study of engine external aerodynamics as a part of multidisciplinary optimization procedure

    NASA Astrophysics Data System (ADS)

    Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr

    2016-10-01

    The paper is devoted to the development of a methodology for optimizing the external aerodynamics of an engine. The optimization procedure is based on the numerical solution of the Reynolds-averaged Navier-Stokes equations. A surrogate-based method is used for the optimization. The optimal shape design of a turbofan nacelle is considered as a test problem. The results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure is considered in the context of 3rd-generation multidisciplinary optimization, developed in the AGILE project.

  2. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images.

    PubMed

    Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan

    2017-12-01

    This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods included random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was the convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81~85% ACC and 0.87~0.92 AUC, which were significantly higher than the results of the texture features. CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of CNN and the best classical method. The sensitivity, specificity, and ACC of human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the important diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research.

  3. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  4. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  5. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  6. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
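
    To fix ideas, a toy MLEM loop with an illustrative chi-square-style stopping statistic; this is not the paper's heuristic rule, only a stand-in showing where such a criterion plugs into the iteration:

      import numpy as np

      rng = np.random.default_rng(2)

      # Tiny toy emission problem: system matrix A, true object, Poisson data y.
      n_pix, n_det = 32, 48
      A = rng.random((n_det, n_pix))
      A /= A.sum(axis=0, keepdims=True)
      x_true = np.zeros(n_pix)
      x_true[10:16] = 50.0
      y = rng.poisson(A @ x_true).astype(float)

      x = np.ones(n_pix)
      sens = A.sum(axis=0)
      for it in range(1, 201):
          proj = A @ x
          x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens   # MLEM update
          # Illustrative stop: Poisson residual chi-square per bin reaches ~1,
          # i.e. the reconstruction explains the data down to the noise level.
          chi2 = np.sum((y - proj) ** 2 / np.maximum(proj, 1e-12)) / n_det
          if chi2 < 1.0:
              print(f"stopped at iteration {it}, chi2/bin = {chi2:.3f}")
              break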

  7. TReacLab: An object-oriented implementation of non-intrusive splitting methods to couple independent transport and geochemical software

    NASA Astrophysics Data System (ADS)

    Jara, Daniel; de Dreuzy, Jean-Raynald; Cochepin, Benoit

    2017-12-01

    Reactive transport modeling contributes to understanding geophysical and geochemical processes in subsurface environments. Operator splitting methods have been proposed as non-intrusive coupling techniques that optimize the use of existing chemistry and transport codes. In this spirit, we propose a coupler relying on external geochemical and transport codes with appropriate operator segmentation that enables possible developments of additional splitting methods. We provide an object-oriented implementation in TReacLab developed in the MATLAB environment in a free open source frame with an accessible repository. TReacLab contains classical coupling methods, template interfaces and calling functions for two classical transport and geochemical codes (PHREEQC and COMSOL). It is tested on four classical benchmarks with homogeneous and heterogeneous reactions at equilibrium or kinetically-controlled. We show that full decoupling down to the implementation level has a cost in terms of accuracy compared to more integrated and optimized codes. Use of non-intrusive implementations like TReacLab is still justified for coupling independent transport and chemical software with minimal development effort, but should be systematically and carefully assessed.

  8. Greek classicism in living structure? Some deductive pathways in animal morphology.

    PubMed

    Zweers, G A

    1985-01-01

    Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint brings necessarily in a stochastic component: an at random variation being a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, finally the question comes up whether for animal structure a similar situation exists as in Greek Classical temples. This means that the at random variation, that is found when the optimal design is used to explain structure, comprises apart from a stochastic part also real deviations being yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".

  9. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.

  10. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.

  11. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to classical approach and simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
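
    The key mechanism, that block profit is a nonlinear (truncated) function of grade so the profit of the expected grade differs from the expected profit over simulations, fits in a few lines; all prices, costs, and grade distributions below are invented, and the maximum-closure pit geometry is omitted:

      import numpy as np

      rng = np.random.default_rng(3)

      n_blocks, n_sims = 1000, 200
      true_grade = rng.lognormal(-1.0, 0.8, n_blocks)
      # Conditional simulations of each block's grade (noisy multiplicative error).
      sims = true_grade[None, :] * rng.lognormal(0.0, 0.4, (n_sims, n_blocks))

      price, recovery, mining_cost, treatment_cost = 80.0, 0.9, 5.0, 20.0

      def block_profit(grade):
          # Treat the block as ore only when its value covers the treatment cost.
          ore_value = price * recovery * grade - treatment_cost
          return np.maximum(ore_value, 0.0) - mining_cost

      classical = block_profit(sims.mean(axis=0))    # profit of the expected grade
      stochastic = block_profit(sims).mean(axis=0)   # expected profit over simulations
      print("classical total:", round(float(classical.sum()), 1),
            "stochastic total:", round(float(stochastic.sum()), 1))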

  12. Multitrace/singletrace formulations and Domain Decomposition Methods for the solution of Helmholtz transmission problems for bounded composite scatterers

    NASA Astrophysics Data System (ADS)

    Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin

    2017-12-01

    We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these type of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complements elimination.

  13. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem by using classical calculus of variations methods. Two perturbation methods were devised. These methods attempt to reduce the sensitivity of the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solution resulting from a number of different initial co-state estimates. The perturbation methods were compared. It is found that they are an improvement over existing methods.

  14. Multi-objective shape optimization of plate structure under stress criteria based on sub-structured mixed FEM and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis

    2015-07-01

    This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) method based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to the primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also computed in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Combined, these methods enable a fast and stress-accurate structural analysis and improve the performance of the repetitive GA. A few cases of minimizing the mass and the maximum Von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method, with promising results: it is able to satisfy multiple damage criteria with different thickness distributions while using a smaller FEM.

  15. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods fall into two groups: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods. Both are implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduces the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PID tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method for the hydraulic positioning system.

  16. Neural dynamic optimization for control systems. I. Background.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two other subsequent papers of this topic present the theory of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.

  17. Neural dynamic optimization for control systems.III. Applications.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    For pt.II. see ibid., p. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper demonstrates NDO with several applications including control of autonomous vehicles and of a robot arm, while the two other companion papers of this topic describe the background for the development of NDO and present the theory of the method, respectively.

  18. Neural dynamic optimization for control systems.II. Theory.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.

  19. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure by benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  20. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
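
    A tiny sketch of the classical log-barrier scheme for bound constraints, whose worsening conditioning as the barrier parameter shrinks motivates the two variants compared in the paper; the projected-gradient inner loop and fixed step size are illustrative simplifications of the (truncated-) Newton iterations actually used:

      import numpy as np

      def barrier_minimize(grad, lo, hi, x0, mu0=1.0, shrink=0.2, tol=1e-8):
          # Minimize f(x) - mu * sum(log(x - lo) + log(hi - x)) for decreasing mu;
          # as mu -> 0 the barrier Hessian becomes increasingly ill-conditioned.
          x, mu = x0.copy(), mu0
          while mu > tol:
              for _ in range(200):
                  g = grad(x) - mu / (x - lo) + mu / (hi - x)
                  x = np.clip(x - 1e-3 * g, lo + 1e-10, hi - 1e-10)
              mu *= shrink
          return x

      # Toy problem: min (x - 2)^2 subject to 0 <= x <= 1 (solution x = 1).
      grad = lambda x: 2.0 * (x - 2.0)
      lo, hi = np.zeros(1), np.ones(1)
      print(barrier_minimize(grad, lo, hi, np.full(1, 0.5)))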

  1. Extended Analytic Device Optimization Employing Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred

    2013-01-01

    Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.

  2. Enhanced expression and purification of camelid single domain VHH antibodies from classical inclusion bodies.

    PubMed

    Maggi, Maristella; Scotti, Claudia

    2017-08-01

    Single domain antibodies (sdAbs) are small antigen-binding domains derived from naturally occurring, heavy chain-only immunoglobulins isolated from camelids and sharks. They maintain the same binding capability of full-length IgGs but with improved thermal stability and permeability, which justifies their scientific, medical and industrial interest. Several described recombinant forms of sdAbs have been produced in different hosts and with different strategies. Here we present an optimized method for a time-saving, high yield production and extraction of a poly-histidine-tagged sdAb from Escherichia coli classical inclusion bodies. Protein expression and extraction were attempted using 4 different methods (e.g. autoinducing or IPTG-induced soluble expression, non-classical and classical inclusion bodies). The best method proved to be expression in classical inclusion bodies with urea-mediated protein extraction, which yielded 60-70 mg/l of bacterial culture. The method we describe here can be of general interest for the enhanced and efficient heterologous expression of sdAbs for research and industrial purposes.

  3. A robust component mode synthesis method for stochastic damped vibroacoustics

    NASA Astrophysics Data System (ADS)

    Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine

    2010-01-01

    In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient way consists in introducing visco- and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite elements method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient to predict the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.

  4. Extremal Optimization: Methods Derived from Co-Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.G.

    1999-07-13

    We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than ''breeding'' better components. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
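
    A minimal tau-EO sketch for balanced graph partitioning, one of the two problems mentioned; the fitness definition, tau value, and swap move follow the usual textbook presentation of Extremal Optimization rather than this report:

      import random

      random.seed(0)

      def extremal_optimization(adj, n_iter=2000, tau=1.4):
          n = len(adj)
          side = [i % 2 for i in range(n)]             # balanced initial bipartition

          def fitness(v):
              # Fraction of v's edges kept inside its own part ("good" components score high).
              return sum(side[u] == side[v] for u in adj[v]) / (len(adj[v]) or 1)

          def cut():
              return sum(side[u] != side[v] for v in range(n) for u in adj[v]) // 2

          best, best_cut = side[:], cut()
          for _ in range(n_iter):
              ranked = sorted(range(n), key=fitness)   # worst-fit vertices first
              u = 1.0 - random.random()                # uniform in (0, 1]
              k = min(int(u ** (-1.0 / (tau - 1.0))), n) - 1   # power-law rank pick
              v = ranked[k]
              w = random.choice([q for q in range(n) if side[q] != side[v]])
              side[v], side[w] = side[w], side[v]      # swap across the cut: keeps balance
              c = cut()
              if c < best_cut:
                  best, best_cut = side[:], c
          return best, best_cut

      # Random graph on 24 vertices.
      n = 24
      adj = [set() for _ in range(n)]
      for _ in range(60):
          a, b = random.sample(range(n), 2)
          adj[a].add(b)
          adj[b].add(a)
      print("best cut found:", extremal_optimization(adj)[1])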

  5. Configuration optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David

    1991-01-01

    The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.

  6. Controlling the Transport of an Ion: Classical and Quantum Mechanical Solutions

    DTIC Science & Technology

    2014-07-09

    Keywords: coherent control, ion traps, quantum information, optimal control theory.

  7. Optimum tuned mass damper design using harmony search with comparison of classical methods

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail; Sayin, Baris

    2017-07-01

    As is well known, tuned mass dampers (TMDs) are added to mechanical systems in order to obtain good vibration damping. The main aim is to reduce the maximum amplitude at the resonance state. In this study, a metaheuristic algorithm called harmony search was employed for the optimum design of TMDs. As the optimization objective, the transfer function of the acceleration of the system with respect to the ground acceleration was minimized. Numerical trials were conducted for four single-degree-of-freedom systems, and the results were compared with classical methods. In conclusion, the proposed method is feasible and more effective than the other documented methods.
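
    As a concrete illustration, the following is a hedged sketch of harmony search tuning the stiffness and damping of a TMD so as to flatten the acceleration transfer function of a single-degree-of-freedom structure under ground excitation. All structural parameters, search bounds and harmony-search settings (hms, hmcr, par) are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def peak_transfer(kd, cd, m=1e5, k=4e6, c=6e3, md=5e3,
                      w=np.linspace(2.0, 12.0, 600)):
        # 2-DOF frequency response (relative coordinates) to unit ground
        # acceleration; returns the peak total-acceleration magnitude.
        a11 = -w**2 * m + 1j * w * (c + cd) + (k + kd)
        a12 = -(1j * w * cd + kd)
        a22 = -w**2 * md + 1j * w * cd + kd
        det = a11 * a22 - a12 * a12
        x1 = (-m * a22 + md * a12) / det               # Cramer's rule, main mass
        return np.max(np.abs(-w**2 * x1 + 1.0))

    def harmony_search(n_iter=2000, hms=20, hmcr=0.9, par=0.3, seed=1):
        rng = np.random.default_rng(seed)
        lo, hi = np.array([1e4, 1e2]), np.array([5e5, 5e4])   # (kd, cd) bounds
        hm = rng.uniform(lo, hi, (hms, 2))                    # harmony memory
        cost = np.array([peak_transfer(*h) for h in hm])
        for _ in range(n_iter):
            pick = hm[rng.integers(hms)]                      # memory consideration
            new = np.where(rng.random(2) < hmcr, pick, rng.uniform(lo, hi))
            adjust = (rng.random(2) < par) * rng.normal(0.0, 0.02) * (hi - lo)
            new = np.clip(new + adjust, lo, hi)               # pitch adjustment
            c, worst = peak_transfer(*new), cost.argmax()
            if c < cost[worst]:                               # replace worst harmony
                hm[worst], cost[worst] = new, c
        return hm[cost.argmin()], cost.min()

    (kd_opt, cd_opt), peak = harmony_search()
    print(f"kd ~ {kd_opt:.3g} N/m, cd ~ {cd_opt:.3g} N*s/m, peak ~ {peak:.3f}")
    ```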

  8. A numerical method for solving a nonlinear 2-D optimal control problem with the classical diffusion equation

    NASA Astrophysics Data System (ADS)

    Mamehrashi, K.; Yousefi, S. A.

    2017-02-01

    This paper presents a numerical solution for a nonlinear 2-D optimal control problem (2DOP). The performance index of a nonlinear 2DOP is described by a state function and a control function, and the dynamic constraint of the system is given by a classical diffusion equation. The Ritz method is used to find the numerical solution of the problem, based on a Legendre polynomial basis. With this method, the given nonlinear 2DOP reduces to the problem of solving a system of algebraic equations. A benefit of the method is the flexibility with which the given initial and boundary conditions of the problem can be imposed. Moreover, compared with the eigenfunction method, satisfactory results are obtained with only a small number of polynomial terms. The numerical approach is applicable and effective for this kind of nonlinear 2DOP. The convergence of the method is discussed extensively, and two illustrative examples are included to demonstrate the validity and applicability of the new technique.

  9. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited by computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sampled inflow data accurately represent the true probability distribution, an assumption that may be invalid and that may undermine the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, the Iterative Linear Decision Rule (ILDR), to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also gives users the flexibility to choose the accuracy of the ILDR approximation by assigning a desired number of piecewise-linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies, including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both single- and multireservoir systems efficiently. The single-reservoir case study shows that the RO method is as good as SSDP when implemented on the original historical inflows, and that it outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as the historical ones. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single-reservoir case in terms of optimal value and distributional robustness.

  10. Economic evaluation of genomic selection in small ruminants: a sheep meat breeding program.

    PubMed

    Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Lemarié, S; Fugeray-Scarbel, A; Elsen, J M

    2016-06-01

    Recent genomic evaluation studies using real data, and studies predicting genetic gain by modeling breeding programs, have reported moderate expected benefits from the replacement of classic selection schemes by genomic selection (GS) in small ruminants. The objectives of this study were to compare the cost, monetary genetic gain and economic efficiency of classic selection and GS schemes in the meat sheep industry. Deterministic methods were used to model selection based on multi-trait indices in a sheep meat breeding program. Decisional variables related to male selection candidates and progeny testing were optimized to maximize the annual monetary genetic gain (AMGG), that is, a weighted sum of the annual genetic gains for meat and maternal traits. For GS, a reference population of 2000 individuals was assumed, and genomic information was available for the evaluation of male candidates only. In the classic selection scheme, males' breeding values were estimated from their own and offspring phenotypes. In GS, different scenarios were considered, differing in the information used to select males (genomic only, genomic + own performance, genomic + offspring phenotypes). The results showed that all GS scenarios were associated with higher total variable costs than classic selection (assuming a genotyping cost of 123 euros/animal). In terms of AMGG and economic returns, GS scenarios were superior to classic selection only if genomic information was combined with the males' own meat phenotypes (GS-Pheno) or with their progeny-test information. The predicted economic efficiency, defined as returns (proportional to the number of expressions of AMGG in the nucleus and commercial flocks) minus total variable costs, showed that the best GS scenario (GS-Pheno) was up to 15% more efficient than classic selection. For all selection scenarios, optimization increased the overall AMGG, returns and economic efficiency. In conclusion, our study shows that some forms of GS strategies are more advantageous than classic selection, provided that GS is already initiated (i.e., the initial reference population is available). Optimizing the decisional variables of the classic selection scheme could be of greater benefit than including genomic information in optimized designs.

  11. Numerical investigation of optimal layout of rockbolts for ground structures

    NASA Astrophysics Data System (ADS)

    Kato, Junji; Ishi, Keiichiro; Terada, Kenjiro; Kyoya, Takashi

    Because reliable ground data are difficult to obtain, the layout of rockbolts is usually determined in a classical way, assuming an isotropic rock stress condition. The present study assumes an anisotropic stress condition and optimizes the layout of rockbolts in order to maximize the stiffness of the unstable ground of tunnels and slopes by applying multiphase layout optimization. The results verify that this method has a real potential to improve the stiffness of unstable ground.

  12. Practical optimal flight control system design for helicopter aircraft. Volume 2: Software user's guide

    NASA Technical Reports Server (NTRS)

    Riedel, S. A.

    1979-01-01

    A method by which modern and classical control theory techniques may be integrated in a synergistic fashion and used in the design of practical flight control systems is presented. A general procedure is developed, and several illustrative examples are included. Emphasis is placed not only on the synthesis of the design but on the assessment of the results as well. The first step is to establish the differences, distinguishing characteristics and connections between the modern and classical control theory approaches. Ultimately, this uncovers a relationship between bandwidth goals familiar in classical control and cost-function weights in the equivalent optimal system. In order to obtain a practical optimal solution, it is also necessary to formulate the problem very carefully, and each choice of state, measurement and output variable must be judiciously considered. Once design goals are established and problem formulation completed, the control system is synthesized in a straightforward manner. Three steps are involved: the filter-observer solution, the regulator solution, and the combination of the two into the controller. Assessment of the controller permits an examination and expansion of the synthesis results.

  13. Quantum theory of multiscale coarse-graining.

    PubMed

    Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A

    2018-03-14

    Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.

  14. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  15. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  16. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
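
    The key numerical ingredient is generic enough to sketch: take a Newton step, but repair the Hessian before factorization so the step is always well defined. In the sketch below a simple diagonal shift stands in for the modified Cholesky factorization used in FOCUS, and the objective (Rosenbrock, with hand-coded derivatives) is only a placeholder for the coil cost function.

    ```python
    import numpy as np

    def modified_newton(grad, hess, x0, tol=1e-8, max_iter=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            H, shift = hess(x), 0.0
            while True:
                try:   # enlarge shift until H + shift*I admits a Cholesky factor
                    L = np.linalg.cholesky(H + shift * np.eye(x.size))
                    break
                except np.linalg.LinAlgError:
                    shift = max(2.0 * shift, 1e-8)
            # Newton step from the triangular factors: (H + shift*I) p = -g.
            p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
            x = x + p
        return x

    # Placeholder objective (Rosenbrock) with analytic gradient and Hessian:
    g = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                            200*(x[1] - x[0]**2)])
    h = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                            [-400*x[0], 200.0]])
    print(modified_newton(g, h, [-1.2, 1.0]))   # converges to [1, 1]
    ```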

  17. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
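
    The division of labor is easy to see in schematic form: the quantum processor answers fitness/gradient queries, and the classical side only updates the control parameters. The sketch below mocks the simulator with a cheap classical function, since in the paper the queries are answered by evolution and measurement on a real NMR system; all names and values here are illustrative.

    ```python
    import numpy as np

    def query_simulator(controls):
        # Stand-in for the quantum processor: in the paper, fidelity and its
        # gradient are estimated from evolution and measurement; here a cheap
        # classical mock with the same interface is used instead.
        target = np.linspace(0.1, 0.9, controls.size)   # hypothetical optimum
        fidelity = 1.0 - np.sum((controls - target) ** 2)
        gradient = -2.0 * (controls - target)
        return fidelity, gradient

    def hybrid_gradient_ascent(n_controls=8, lr=0.2, tol=1e-6, max_queries=500):
        u = np.zeros(n_controls)                 # initial control parameters
        f = None
        for _ in range(max_queries):
            f, g = query_simulator(u)            # "quantum" part: one query
            if np.linalg.norm(g) < tol:
                break
            u = u + lr * g                       # classical part: update rule
        return u, f

    controls, fidelity = hybrid_gradient_ascent()
    print(f"final fidelity ~ {fidelity:.6f}")
    ```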

  18. Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography

    PubMed Central

    Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.

    2017-01-01

    We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454

  19. An opinion formation based binary optimization approach for feature selection

    NASA Astrophysics Data System (ADS)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism, based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  20. Device-Independent Tests of Entropy

    NASA Astrophysics Data System (ADS)

    Chaves, Rafael; Brask, Jonatan Bohr; Brunner, Nicolas

    2015-09-01

    We show that the entropy of a message can be tested in a device-independent way. Specifically, we consider a prepare-and-measure scenario with classical or quantum communication, and develop two different methods for placing lower bounds on the communication entropy, given observable data. The first method is based on the framework of causal inference networks. The second technique, based on convex optimization, shows that quantum communication provides an advantage over classical communication, in the sense of requiring a lower entropy to reproduce given data. These ideas may serve as a basis for novel applications in device-independent quantum information processing.

  1. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  2. Quantum money with nearly optimal error tolerance

    NASA Astrophysics Data System (ADS)

    Amiri, Ryan; Arrazola, Juan Miguel

    2017-06-01

    We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.

  3. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and by the estimation of the plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of an internal reference line and an optimized estimation of the plasma temperature is proposed. The internal reference line of each species is automatically selected from the analytical lines by a programmable procedure using easily accessible parameters, and the self-absorption effect of the internal reference line is accounted for during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature starting from the Boltzmann plot results. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of the calculated Cr, Ni, and Fe concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrate an improvement over the classical CF-LIBS method and promising potential for in situ, real-time application.
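
    The temperature-refinement step can be sketched independently of the spectroscopy: PSO searches for the temperature T (and Boltzmann-plot intercept b) that best fit the measured line data. The sketch below uses synthetic Boltzmann-plot points rather than LIBS measurements, and all algorithm settings are illustrative assumptions.

    ```python
    import numpy as np

    K_B = 8.617e-5          # Boltzmann constant in eV/K

    def boltzmann_residual(params, E_upper, y):
        # y = ln(I*lambda / (g*A)); Boltzmann model: y = b - E_upper/(K_B*T)
        T, b = params
        return np.sum((y - (b - E_upper / (K_B * T))) ** 2)

    def pso(objective, lo, hi, n_particles=30, n_iter=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pcost = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pcost.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            cost = np.array([objective(p) for p in x])
            better = cost < pcost
            pbest[better], pcost[better] = x[better], cost[better]
            gbest = pbest[pcost.argmin()].copy()
        return gbest, pcost.min()

    # Synthetic Boltzmann-plot points generated at T = 9000 K, plus noise:
    E = np.array([2.5, 3.1, 3.9, 4.4, 5.2])     # upper-level energies, eV
    y = 12.0 - E / (K_B * 9000.0) + np.random.default_rng(1).normal(0, 0.05, E.size)
    (T_fit, b_fit), _ = pso(lambda p: boltzmann_residual(p, E, y),
                            lo=np.array([5e3, 0.0]), hi=np.array([2e4, 20.0]))
    print(f"estimated T ~ {T_fit:.0f} K")
    ```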

  4. Texture mapping via optimal mass transport.

    PubMed

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.

  5. Optimization of the Bridgman crystal growth process

    NASA Astrophysics Data System (ADS)

    Margulies, M.; Witomski, P.; Duffar, T.

    2004-05-01

    A numerical optimization method for the vertical Bridgman growth configuration is presented and developed. It makes it possible to optimize the furnace temperature field and the pulling rate versus time in order to decrease the radial thermal gradients in the sample. Constraints are included in order to ensure physically realistic results. The model includes the two classical nonlinearities associated with crystal growth processes: the radiative thermal exchange and the release of latent heat at the solid-liquid interface. The mathematical analysis and development of the problem are briefly described. Several examples show that the method works in a satisfactory way; however, the results depend on the numerical parameters. Improvements of the optimization model, from both the physical and the numerical point of view, are suggested.

  6. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results proves the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power configuration solution.

  7. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  8. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  9. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
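
    Since the same construction appears in all three versions of this abstract, it may help to sketch it once in equations. Starting from Hamilton's principle in phase-space form and integrating the delta-p term by parts moves every time derivative onto the virtual quantities, which is exactly the property the abstracts emphasize (a schematic sketch; the papers' handling of boundary and trajectory terms is suppressed):

    ```latex
    0 = \int_{t_0}^{t_1}\!\Big( \delta p^{T}\dot q + p^{T}\delta\dot q
          - \delta q^{T}\,\frac{\partial H}{\partial q}
          - \delta p^{T}\,\frac{\partial H}{\partial p} \Big)\,dt
      = \int_{t_0}^{t_1}\!\Big( p^{T}\delta\dot q - q^{T}\delta\dot p
          - \delta q^{T}\,\frac{\partial H}{\partial q}
          - \delta p^{T}\,\frac{\partial H}{\partial p} \Big)\,dt
      + \big[\,\delta p^{T} q\,\big]_{t_0}^{t_1}.
    ```

    Because only the virtual displacements and virtual momenta appear differentiated in time, q and p themselves can be expanded in simple low-order polynomial shape functions, which is what makes the temporal finite element discretization straightforward.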

  10. The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter

    NASA Technical Reports Server (NTRS)

    Townsend, Barbara K.

    1987-01-01

    A control-system design method, quadratic optimal cooperative control synthesis (CCS), is applied to the design of a stability and control augmentation system (SCAS). The CCS design method is different from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot-model to create desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and linear quadratic regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.

  11. Assessment of Masonry Buildings Subjected to Landslide-Induced Settlements: From Load Path Method to Evolutionary Optimization Method

    NASA Astrophysics Data System (ADS)

    Palmisano, Fabrizio; Elia, Angelo

    2017-10-01

    One of the main difficulties when dealing with landslide structural vulnerability is diagnosing the causes of crack patterns. This is partly due to the excessive complexity of models based on classical structural mechanics, which makes them inappropriate, especially when a rapid vulnerability assessment must be performed at the territorial scale. This is why a new approach, based on a 'simple model' (i.e. the Load Path Method, LPM), has been proposed by Palmisano and Elia for the interpretation of the behaviour of masonry buildings subjected to landslide-induced settlements. However, the LPM is most useful for rapidly finding the 'most plausible solution' rather than the exact solution; to find the latter, optimization algorithms are necessary. In this scenario, this article shows how the Bidirectional Evolutionary Structural Optimization method of Huang and Xie can be very useful for optimizing the strut-and-tie models obtained with the Load Path Method.

  12. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space-charge force is still the major time cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space-charge effect in accelerators.
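
    For reference, the baseline the paper optimizes looks like the following hedged sketch: a free-space solve by zero-padded FFT convolution of the charge density with a Green's function, here point-sampled in 2D. The paper's integrated Green's function, DCT and fast-convolution refinements replace the naive pieces below; grid sizes and units are illustrative.

    ```python
    import numpy as np

    def free_space_poisson_2d(rho, h):
        ny, nx = rho.shape
        iy = np.arange(2 * ny); iy[iy >= ny] -= 2 * ny        # signed offsets
        ix = np.arange(2 * nx); ix[ix >= nx] -= 2 * nx
        r = np.hypot(*np.meshgrid(iy * h, ix * h, indexing="ij"))
        r[0, 0] = 0.5 * h                                     # regularize r = 0
        G = -np.log(r) / (2.0 * np.pi)                        # 2D free-space kernel
        pad = np.zeros((2 * ny, 2 * nx)); pad[:ny, :nx] = rho  # zero padding
        spec = np.fft.rfft2(pad) * np.fft.rfft2(G)            # circular convolution
        return np.fft.irfft2(spec, s=pad.shape)[:ny, :nx] * h * h

    # Unit point charge at the grid center; phi falls off like -ln(r)/(2*pi).
    h = 0.05
    rho = np.zeros((64, 64)); rho[32, 32] = 1.0 / h**2
    phi = free_space_poisson_2d(rho, h)
    print(phi[32, 42] - phi[32, 52])   # ~ ln(2)/(2*pi) ~ 0.110
    ```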

  13. The optimal location of piezoelectric actuators and sensors for vibration control of plates

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ramesh; Narayanan, S.

    2007-12-01

    This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. The LQR performance is taken as the objective for finding the optimal locations of the sensor-actuator pairs. The problem is formulated as multi-input multi-output (MIMO) model control using the finite element method (FEM). The discrete optimal sensor and actuator location problem is cast as a zero-one optimization problem, which is solved using a genetic algorithm (GA). Different classical control strategies, such as direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme, are applied to study the control effectiveness.

  14. Efficiency of quantum vs. classical annealing in nonconvex learning problems

    PubMed Central

    Zecchina, Riccardo

    2018-01-01

    Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764

  15. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.

  16. Derived heuristics-based consistent optimization of material flow in a gold processing plant

    NASA Astrophysics Data System (ADS)

    Myburgh, Christie; Deb, Kalyanmoy

    2018-01-01

    Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which, when modelled in a programming format, involve if-then-else statements. The formulation of an optimization problem for such processes therefore becomes complicated, with nonlinear and non-differentiable objective and constraint functions. In handling such problems with classical point-based approaches, users often have to resort to modifications and indirect representations of the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure capable of handling all the complexities of the problem is developed. Although the evolutionary approach already produced results with comparatively low variance over multiple runs, its performance is further enhanced by introducing derived heuristics associated with the problem. In this article, the development and use of derived heuristics in a practical problem are presented, and their importance for quick convergence of the overall algorithm is demonstrated.

  17. Modern Optimization Methods in Minimum Weight Design of Elastic Annular Rotating Disk with Variable Thickness

    NASA Astrophysics Data System (ADS)

    Jafari, S.; Hojjati, M. H.

    2011-12-01

    Rotating disks mostly work at high angular velocities, which result in large centrifugal forces and consequently induce large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding the optimal disk thickness profile for a minimum-weight design using simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. In the semi-analytical approach used here, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress always remains below the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weights obtained with the two methods are almost identical: the PSO method gives a profile with slightly lower weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility than classical methods.

  18. Selection of Hidden Layer Neurons and Best Training Method for FFNN in Application of Long Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj

    2012-05-01

    For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management and plant structure planning. A new technique for long-term load forecasting (LTLF) using an optimized feed-forward artificial neural network (FFNN) architecture is presented in this paper; it selects the optimal number of neurons in the hidden layer as well as the best training method for the case study. The prediction performance of the proposed technique is evaluated using the mean absolute percentage error (MAPE) between Thailand's private electricity consumption and the forecasted data. The results obtained are compared with those of classical auto-regressive (AR) and moving average (MA) methods. It is observed that, in general, the proposed method is more accurate in prediction.
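
    The selection idea reduces to a sweep: train one network per (hidden size, training algorithm) pair and keep the pair with the lowest held-out MAPE. The sketch below uses scikit-learn's MLPRegressor and a synthetic yearly-consumption series, not the Thailand dataset; candidate sizes and solvers are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def mape(y_true, y_pred):
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

    rng = np.random.default_rng(0)
    t = np.arange(22, dtype=float)                    # years since series start
    load = 80 + 3.5 * t + rng.normal(0, 2, t.size)    # synthetic consumption, TWh
    X = t.reshape(-1, 1)
    train, test = slice(0, 17), slice(17, None)       # hold out the last years

    best = None
    for n_hidden in (2, 4, 8, 16):                    # candidate hidden sizes
        for solver in ("lbfgs", "adam"):              # candidate training methods
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver=solver,
                               max_iter=5000, random_state=0)
            net.fit(X[train], load[train])
            err = mape(load[test], net.predict(X[test]))
            if best is None or err < best[0]:
                best = (err, n_hidden, solver)

    print("best: %d hidden neurons, solver=%s, MAPE=%.2f%%"
          % (best[1], best[2], best[0]))
    ```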

  19. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…

  20. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, Xin-ran; Wang, Xin

    2017-01-01

    A huge quantity of too-short-arc (TSA) observational data have been obtained in sky surveys of space objects. However, reasonable results for the TSAs can hardly be obtained with the classical methods of initial orbit determination (IOD). In this paper, the IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selections of the optimized variables and the corresponding genetic operators for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.

  1. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, X. R.; Wang, X.

    2016-01-01

    The sky surveys of space objects have obtained a huge quantity of too-short-arc (TSA) observation data. However, the classical method of initial orbit determination (IOD) can hardly get reasonable results for the TSAs. The IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selection of optimizing variables as well as the corresponding genetic operator for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.

  2. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
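
    For orientation, here is the unadorned pattern-search skeleton that GPS methods build on: poll a direction set around the incumbent, accept any improving trial, and shrink the mesh after an unsuccessful poll. The conforming poll sets that are the paper's actual contribution are omitted; the nonsmooth test function (whose kinks lie on the coordinate hyperplanes, so coordinate polling happens to conform) and all settings are illustrative.

    ```python
    import numpy as np

    def pattern_search(f, x0, mesh=1.0, shrink=0.5, tol=1e-8, max_iter=10000):
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        directions = np.vstack([np.eye(x.size), -np.eye(x.size)])  # poll set
        for _ in range(max_iter):
            if mesh < tol:
                break
            for d in directions:
                trial = x + mesh * d
                ft = f(trial)
                if ft < fx:                    # successful poll: move, keep mesh
                    x, fx = trial, ft
                    break
            else:                              # unsuccessful poll: refine mesh
                mesh *= shrink
        return x, fx

    # Nonsmooth test whose kinks lie on the coordinate hyperplanes:
    x_opt, f_opt = pattern_search(lambda z: abs(z[0]) + 2.0 * abs(z[1]),
                                  [3.0, -2.0])
    print(x_opt, f_opt)    # -> near [0, 0], 0
    ```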

  3. The Near-infrared Optimal Distances Method Applied to Galactic Classical Cepheids Tightly Constrains Mid-infrared Period–Luminosity Relations

    NASA Astrophysics Data System (ADS)

    Wang, Shu; Chen, Xiaodian; de Grijs, Richard; Deng, Licai

    2018-01-01

    Classical Cepheids are well-known and widely used distance indicators. As distance and extinction are usually degenerate, it is important to develop suitable methods to robustly anchor the distance scale. Here, we introduce a near-infrared optimal distance method to determine both the extinction values of and distances to a large sample of 288 Galactic classical Cepheids. The overall uncertainty in the derived distances is less than 4.9%. We compare our newly determined distances to the Cepheids in our sample with previously published distances to the same Cepheids with Hubble Space Telescope parallax measurements and distances based on the IR surface brightness method, Wesenheit functions, and the main-sequence fitting method. The systematic deviations in the distances determined here with respect to those of previous publications are less than 1%–2%. Hence, we constructed Galactic mid-IR period–luminosity (PL) relations for classical Cepheids in the four Wide-Field Infrared Survey Explorer (WISE) bands (W1, W2, W3, and W4) and the four Spitzer Space Telescope bands ([3.6], [4.5], [5.8], and [8.0]). Based on our sample of hundreds of Cepheids, the WISE PL relations have been determined for the first time; their dispersion is approximately 0.10 mag. Using the currently most complete sample, our Spitzer PL relations represent a significant improvement in accuracy, especially in the [3.6] band, which has the smallest dispersion (0.066 mag). In addition, the average mid-IR extinction curve for Cepheids has been obtained: A(W1)/A(Ks) ≈ 0.560, A(W2)/A(Ks) ≈ 0.479, A(W3)/A(Ks) ≈ 0.507, A(W4)/A(Ks) ≈ 0.406, A([3.6])/A(Ks) ≈ 0.481, A([4.5])/A(Ks) ≈ 0.469, A([5.8])/A(Ks) ≈ 0.427, and A([8.0])/A(Ks) ≈ 0.427.

  4. Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Stanford, Bret K.

    2017-01-01

    Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the common research model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.

  5. The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter

    NASA Technical Reports Server (NTRS)

    Townsend, Barbara K.

    1986-01-01

    A control-system design method, Quadratic Optimal Cooperative Control Synthesis (CCS), is applied to the design of a Stability and Control Augmentation System (SCAS). The CCS design method is different from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot-model to create desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing Vertol CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and Linear Quadratic Regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.

  6. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by the continuous wavelet transform (CWT) and the classical derivative transform (classical derivative spectrophotometry). In this quantitative spectral analysis, neither of the two proposed analytical methods requires any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for processing the overlapping signals of the analyzed compounds. We observed that the coiflet (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained with the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.

  7. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    PubMed Central

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for the natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests evaluating the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks achieve higher accuracy on computer vision and object recognition tasks, which suggests that it may be worth evaluating whether larger networks can increase sEMG classification accuracy too. PMID:27656140

  8. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    PubMed

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for the natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests evaluating the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks achieve higher accuracy on computer vision and object recognition tasks, which suggests that it may be worth evaluating whether larger networks can increase sEMG classification accuracy too.

  9. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
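
    The CLS core shared by these variants is a one-line least-squares solve; the sketch below shows it on synthetic Gaussian "spectra" standing in for the three analytes (all spectra, concentrations and noise levels are invented for illustration). The NAP, OSC and DOSC variants differ in how they filter the measured spectrum before this step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.linspace(220.0, 320.0, 200)                 # wavelength grid, nm

    def band(center, width):
        # Gaussian absorption band as a placeholder pure-component spectrum.
        return np.exp(-((wl - center) / width) ** 2)

    S = np.column_stack([band(250, 12), band(270, 9), band(295, 15)])  # 3 analytes
    c_true = np.array([0.8, 0.5, 1.3])                  # "true" concentrations
    a = S @ c_true + rng.normal(0, 0.002, wl.size)      # mixture spectrum + noise

    # CLS step: concentrations as the least-squares solution of S c = a.
    c_hat, *_ = np.linalg.lstsq(S, a, rcond=None)
    print(np.round(c_hat, 3))                           # ~ [0.8, 0.5, 1.3]
    ```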

  10. Combined Optimal Control System for excavator electric drive

    NASA Astrophysics Data System (ADS)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

    The article presents a synthesis of the combined optimal control algorithms of the AC drive rotation mechanism of the excavator. Synthesis of algorithms consists in the regulation of external coordinates - based on the theory of optimal systems and correction of the internal coordinates electric drive using the method "technical optimum". The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality criterion of coefficients to find the intersection of the range of permissible values of the coordinates of the control object. There is possibility of system settings by choosing the optimality criterion coefficients, which allows one to select the required characteristics of the drive: the dynamic moment (M) and the time of the transient process (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and at the same time - reduce the transient time (tpp).

  11. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
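
    The costate construction mentioned here (and in the companion records below) can be summarized in three lines. This is the standard adjoint-gradient recipe, written schematically with flow state w, design variables alpha, flow residual R(w, alpha) = 0 and cost J(w, alpha):

    ```latex
    \mathcal{L}(w,\alpha,\lambda) = J(w,\alpha) + \lambda^{T} R(w,\alpha), \qquad
    \Big(\frac{\partial R}{\partial w}\Big)^{T}\lambda
        = -\Big(\frac{\partial J}{\partial w}\Big)^{T}
        \quad\text{(costate equation)},
    \qquad
    \frac{dJ}{d\alpha} = \frac{\partial J}{\partial \alpha}
        + \lambda^{T}\,\frac{\partial R}{\partial \alpha}.
    ```

    One costate solve yields the full design gradient at a cost independent of the number of design variables, which is why the total optimization cost can stay within a small multiple of a single analysis.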

  12. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  13. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  14. Constrained Optimal Transport

    NASA Astrophysics Data System (ADS)

    Ekren, Ibrahim; Soner, H. Mete

    2018-03-01

    The classical duality theory of Kantorovich (C R (Doklady) Acad Sci URSS (NS) 37:199-201, 1942) and Kellerer (Z Wahrsch Verw Gebiete 67(4):399-432, 1984) for classical optimal transport is generalized to an abstract framework and a characterization of the dual elements is provided. This abstract generalization is set in a Banach lattice X with an order unit. The problem is given as the supremum over a convex subset of the positive unit sphere of the topological dual of X and the dual problem is defined on the bi-dual of X. These results are then applied to several extensions of the classical optimal transport.
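
    For orientation, the classical Kantorovich duality that this record generalizes can be stated, for a cost function c and marginals μ, ν with couplings Π(μ, ν), in the standard form:

    ```latex
    \inf_{\pi \in \Pi(\mu,\nu)} \int_{X \times Y} c(x,y)\,\mathrm{d}\pi(x,y)
      \;=\; \sup\Bigl\{ \int_X \varphi\,\mathrm{d}\mu + \int_Y \psi\,\mathrm{d}\nu
            \;:\; \varphi(x) + \psi(y) \le c(x,y) \Bigr\}
    ```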

  15. Selecting Items for Criterion-Referenced Tests.

    ERIC Educational Resources Information Center

    Mellenbergh, Gideon J.; van der Linden, Wim J.

    1982-01-01

    Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)

  16. Evaluation of the light scattering and the turbidity microtiter plate-based methods for the detection of the excipient-mediated drug precipitation inhibition.

    PubMed

    Petruševska, Marija; Urleb, Uroš; Peternel, Luka

    2013-11-01

The excipient-mediated precipitation inhibition is classically determined by quantifying the dissolved compound in solution. In this study, two alternative approaches were evaluated: the light scattering (nephelometer) and the turbidity (plate reader) microtiter plate-based methods, both of which quantify the compound precipitate. Following optimization of the nephelometer settings (beam focus, laser gain) and the experimental conditions, 23 excipients were screened for inhibition of the precipitation of poorly soluble fenofibrate and dipyridamole. The light scattering method showed excellent correlation (r>0.91) between the calculated precipitation inhibitor parameters (PIPs) and the precipitation inhibition index (PI(classical)) obtained by the classical approach for fenofibrate and dipyridamole. Among the evaluated PIPs, AUC100 (nephelometer) produced only four false positives and no false negatives. For the turbidity-based method, a good correlation with PI(classical) was obtained for the PIP maximal optical density (OD(max), r=0.91), but only for fenofibrate; with OD(max) (plate reader), five false positives and two false negatives were identified. In conclusion, the light scattering-based method outperformed the turbidity-based one and can be reliably used for identification of novel precipitation inhibitors. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Novel hyperspectral prediction method and apparatus

    NASA Astrophysics Data System (ADS)

    Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf

    2009-05-01

Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-CalTM software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction EngineTM module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
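
    The matched-filter construction at the heart of SBC-style calibration can be sketched as below; the function name and the normalization are assumptions for illustration, and the Hyper-Cal implementation may differ in detail.

    ```python
    import numpy as np

    def sbc_regression_vector(g, Sigma_noise):
        """Matched-filter-style regression vector: combine a spectroscopically
        measured signal spectrum g with a statistically estimated noise
        covariance Sigma_noise (the combination SBC is built around)."""
        w = np.linalg.solve(Sigma_noise, g)   # Sigma_noise^{-1} g
        return w / (g @ w)                    # normalized so that b @ g == 1

    # prediction for a measured spectrum x (assumed pre-treated):
    # y_hat = b @ x
    ```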

  18. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    PubMed

    D'Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W; Marsh, May

    2014-09-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems.

  19. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    PubMed Central

    D’Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W.; Marsh, May

    2014-01-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems. PMID:25195878

  20. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].

  1. Artificial Intelligence Methods in Pursuit Evasion Differential Games

    DTIC Science & Technology

    1990-07-30

objectives, sometimes with fuzzy ones. Classical optimization, control or game-theoretic methods are insufficient for their resolution. ... (front-matter residue removed; recoverable figure titles include an example AHP hierarchy for choosing the most appropriate differential game and parametrization) ... the Analytic Hierarchy Process (AHP), originated by T.L. Saaty of the Wharton School, is a general theory of ...

  2. Approximation of Nash equilibria and the network community structure detection problem

    PubMed Central

    2017-01-01

    Game theory based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better or just as good as other state-of-the-art methods. PMID:28467496

  3. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L^2-norm for the scalar unknown u and a priori error estimates in the (L^2)^2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H^1-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153

  4. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.

  5. The q-G method: A q-version of the Steepest Descent method for global optimization.

    PubMed

    Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M

    2015-01-01

In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has a great potential for solving multimodal optimization problems.
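
    A minimal sketch of the q-gradient idea follows, assuming a numerical Jackson-derivative approximation and an illustrative schedule driving q toward 1; the paper's actual three-parameter strategy is more elaborate.

    ```python
    import numpy as np

    def q_gradient(f, x, q=0.95, eps=1e-12):
        """Numerical q-gradient built from Jackson's derivative:
        D_q f(x)_i = (f(x with x_i -> q*x_i) - f(x)) / ((q - 1) * x_i),
        falling back to an ordinary forward difference when x_i ~ 0."""
        fx = f(x)
        g = np.empty_like(x, dtype=float)
        for i in range(len(x)):
            xq = x.copy()
            if abs(x[i]) > eps:
                xq[i] = q * x[i]
                g[i] = (f(xq) - fx) / ((q - 1.0) * x[i])
            else:
                xq[i] = x[i] + 1e-8
                g[i] = (f(xq) - fx) / 1e-8
        return g

    def qg_descent(f, x0, q0=0.7, lr=0.05, steps=200):
        """q-G style search: follow the negative q-gradient while q drifts
        toward 1, shifting from global exploration toward classical
        Steepest Descent (the schedule here is illustrative)."""
        x, q = np.asarray(x0, dtype=float), q0
        for _ in range(steps):
            x = x - lr * q_gradient(f, x, q)
            q += (1.0 - q) * 0.02        # q -> 1 gradually
        return x
    ```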

  6. Extremality of Gaussian quantum states.

    PubMed

    Wolf, Michael M; Giedke, Geza; Cirac, J Ignacio

    2006-03-03

    We investigate Gaussian quantum states in view of their exceptional role within the space of all continuous variables states. A general method for deriving extremality results is provided and applied to entanglement measures, secret key distillation and the classical capacity of bosonic quantum channels. We prove that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states. This result leads to a clearer picture of the validity of frequently made Gaussian approximations. Moreover, it implies that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.

  7. Experimental demonstration of a quantum annealing algorithm for the traveling salesman problem in a nuclear-magnetic-resonance quantum simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Hongwei; High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031; Kong Xi

The method of quantum annealing (QA) is a promising way for solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schroedinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.

  8. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference between the theorem proved here and its classical analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.

  9. A Frequency-Domain Adaptive Matched Filter for Active Sonar Detection.

    PubMed

    Zhao, Zhishan; Zhao, Anbang; Hui, Juan; Hou, Baochun; Sotudeh, Reza; Niu, Fang

    2017-07-04

The most classical detector in active sonar and radar is the matched filter (MF), which is the optimal processor under ideal conditions. For active sonar detection, we propose a frequency-domain adaptive matched filter (FDAMF) built on a frequency-domain adaptive line enhancer (ALE); the FDAMF is an improved MF. In our simulations, the signal-to-noise ratio (SNR) gain of the FDAMF is about 18.6 dB higher than that of the classical MF when the input SNR is -10 dB. To improve the performance of the FDAMF at low input SNR, we further propose a pre-processing method called frequency-domain time-reversal convolution and interference suppression (TRC-IS). Compared with the classical MF, the FDAMF combined with TRC-IS achieves higher SNR gain, a lower detection threshold, and a better receiver operating characteristic (ROC) in our simulations. The simulation results show that the FDAMF has higher processing gain and better detection performance than the classical MF under ideal conditions, and the experimental results indicate that the FDAMF improves on the MF and can adapt to actual interference to some extent. In addition, the TRC-IS preprocessing works well in an actual noisy ocean environment.
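
    As a point of reference for the classical MF baseline discussed above, a minimal correlation-based matched filter can be sketched as follows; the replica, noise level and indices are illustrative, and the FDAMF itself additionally involves an adaptive line enhancer not shown here.

    ```python
    import numpy as np

    def matched_filter(x, s):
        """Classical MF: correlate the received series x with the known
        replica s; under white noise this maximizes the output SNR."""
        return np.correlate(x, s, mode="valid")

    # toy usage: a replica buried in noise at low input SNR
    rng = np.random.default_rng(0)
    s = np.sin(2.0 * np.pi * 0.1 * np.arange(128))   # known transmit pulse
    x = rng.normal(0.0, 3.0, 4096)
    x[1000:1128] += s                                # hidden echo at sample 1000
    y = matched_filter(x, s)
    print("detected delay:", int(np.argmax(np.abs(y))))  # ~1000
    ```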

  10. Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.

    PubMed

    Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna

    2016-06-27

    This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.

  11. Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants

    PubMed Central

    Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna

    2016-01-01

    This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated. PMID:27355949

  12. Spin Glass Patch Planting

    NASA Technical Reports Server (NTRS)

    Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.

    2016-01-01

In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking future-generation quantum annealing machines as well as classical and quantum optimization algorithms.

  13. Optimal trajectories for aeroassisted orbital transfer

    NASA Technical Reports Server (NTRS)

    Miele, A.; Venkataraman, P.

    1983-01-01

Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. First-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the energy consumption of the aeroassisted transfer and for the time integral of the heating rate, as well as for the pressure. It is shown from the eigenvalues of the Jacobian matrix that the differential system is both stiff and unstable, implying that the sequential gradient-restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-point boundary value problem, is recommended.

  14. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable by brute force even with supercomputing resources. Typically, such problems have been approximated with heuristics, but these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world satellite-constellation problem cast as the weighted k-clique problem, and through them provide insight into the state of quantum machine learning.

  15. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

Since sudden environmental shocks and toxicants are known to affect the population dynamics of fish species, a mechanistic understanding of how sudden environmental change and toxicants influence the optimal harvesting policy requires development. This paper studies the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise describes sudden climate changes. Due to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill this gap and establish the optimal harvesting policy. Using aggregation and ergodic methods, approximations of the optimal harvesting effort and the maximum expected sustainable yield are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organisms may affect the optimal harvesting policy significantly.

  16. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
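
    A minimal sketch of the FIM-based criteria compared in the paper, using the Verhulst-Pearl logistic model as the test model; the finite-difference sensitivities, the Gaussian error model and the parameter values are illustrative assumptions, not the paper's exact setup.

    ```python
    import numpy as np

    def fisher_information(model, theta, times, sigma=1.0, h=1e-6):
        """FIM for a scalar-output model with i.i.d. Gaussian errors:
        F = S^T S / sigma^2, where S[i, j] = d model(t_i, theta) / d theta_j,
        with sensitivities approximated by central differences."""
        S = np.zeros((len(times), len(theta)))
        for j in range(len(theta)):
            tp = np.array(theta, dtype=float); tp[j] += h
            tm = np.array(theta, dtype=float); tm[j] -= h
            S[:, j] = (model(times, tp) - model(times, tm)) / (2.0 * h)
        return S.T @ S / sigma**2

    def logistic(t, theta):
        # Verhulst-Pearl logistic growth: x(t) = K x0 / (x0 + (K - x0) e^{-rt})
        K, r, x0 = theta
        return K * x0 / (x0 + (K - x0) * np.exp(-r * t))

    times = np.linspace(0.0, 10.0, 15)            # a candidate sampling mesh
    F = fisher_information(logistic, [17.5, 0.7, 0.1], times)
    print("D-criterion (det F):  ", np.linalg.det(F))
    print("E-criterion (min eig):", np.linalg.eigvalsh(F).min())
    print("standard errors:      ", np.sqrt(np.diag(np.linalg.inv(F))))
    ```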

  17. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing on the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  18. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
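
    For reference, the classical non-monotone (Grippo-type) acceptance rule that this line of work builds on, and whose fixed memory bound M the paper's technique avoids, can be sketched as below; the function signature is an illustrative assumption.

    ```python
    import numpy as np

    def nonmonotone_armijo(f, x, d, g, f_hist, M=10, c=1e-4, beta=0.5):
        """Classical non-monotone acceptance test: backtrack until the trial
        value improves on the maximum of the last M objective values, not
        merely the latest one. f_hist holds recent f-values, g is the
        gradient at x, and d is the search direction."""
        f_ref = max(f_hist[-M:])                  # non-monotone reference value
        slope = float(g @ d)                      # negative for a descent direction
        t = 1.0
        while f(x + t * d) > f_ref + c * t * slope and t > 1e-12:
            t *= beta                             # backtracking step
        return t
    ```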

  19. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  20. On optimal improvements of classical iterative schemes for Z-matrices

    NASA Astrophysics Data System (ADS)

    Noutsos, D.; Tzoumas, M.

    2006-04-01

    Many researchers have considered preconditioners, applied to linear systems, whose matrix coefficient is a Z- or an M-matrix, that make the associated Jacobi and Gauss-Seidel methods converge asymptotically faster than the unpreconditioned ones. Such preconditioners are chosen so that they eliminate the off-diagonal elements of the same column or the elements of the first upper diagonal [Milaszewicz, LAA 93 (1987) 161-170], Gunawardena et al. [LAA 154-156 (1991) 123-143]. In this work we generalize the previous preconditioners to obtain optimal methods. "Good" Jacobi and Gauss-Seidel algorithms are given and preconditioners, that eliminate more than one entry per row, are also proposed and analyzed. Moreover, the behavior of the above preconditioners to the Krylov subspace methods is studied.

  1. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
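
    One simple way to realize a PCA stack is to average the rank-1 SVD approximation of the gather, as sketched below; the paper's contribution is a fast algorithm that avoids the full SVD, which this sketch does not reproduce.

    ```python
    import numpy as np

    def pca_stack(gather):
        """PCA-type stack of a gather (n_traces x n_samples): average the
        rank-1 SVD approximation, so coherent energy captured by the first
        principal component survives and incoherent noise is damped."""
        u, s, vt = np.linalg.svd(gather, full_matrices=False)
        return s[0] * u[:, 0].mean() * vt[0]   # mean over traces of s1*u1*v1^T
    ```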

  2. Optimal solar sail planetocentric trajectories

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.

    1977-01-01

    The analysis of solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included and some consideration is given to a heliogyro sail model. Orbit to a subescape point and orbit to orbit transfer are considered. Trajectories about the four inner planets can be calculated and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.

  3. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

The time-varying-filtering-based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD improves the frequency separation performance and is robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
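
    A hedged sketch of a weighted kurtosis index of the kind described, combining kurtosis with the IMF-to-signal correlation coefficient; the exact weighting used in the paper may differ from this product form.

    ```python
    import numpy as np
    from scipy.stats import kurtosis, pearsonr

    def weighted_kurtosis_index(imf, signal):
        """Weighted kurtosis index in the spirit of the paper: the kurtosis
        of an IMF weighted by its correlation with the raw signal, so a
        candidate must be both impulsive and relevant to score highly."""
        k = kurtosis(imf, fisher=False)           # plain (Pearson) kurtosis
        rho = abs(pearsonr(imf, signal)[0])       # |correlation coefficient|
        return k * rho
    ```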

  4. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  5. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm.

    PubMed

    Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B

    2017-07-01

A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and serves as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
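
    As a baseline for the problem the Shortlist Method accelerates (not the Shortlist Method itself), the classical transportation linear program can be set up directly, e.g. with scipy's LP solver; the function name and toy data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def transport_lp(supply, demand, cost):
        """Baseline LP solution of the transportation problem: minimize
        sum c_ij x_ij subject to row sums = supply, column sums = demand,
        and x >= 0 (linprog's default bound)."""
        m, n = cost.shape
        A_eq, b_eq = [], []
        for i in range(m):                      # supply (row-sum) constraints
            row = np.zeros((m, n)); row[i, :] = 1.0
            A_eq.append(row.ravel()); b_eq.append(supply[i])
        for j in range(n):                      # demand (column-sum) constraints
            col = np.zeros((m, n)); col[:, j] = 1.0
            A_eq.append(col.ravel()); b_eq.append(demand[j])
        res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                      method="highs")
        return res.x.reshape(m, n), res.fun     # optimal plan, EMD-type cost

    plan, emd = transport_lp(np.array([2.0, 1.0]), np.array([1.0, 2.0]),
                             np.array([[0.0, 1.0], [1.0, 0.0]]))
    ```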

  7. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106

  8. Rapid determination of polycyclic aromatic hydrocarbons in rainwater by liquid-liquid microextraction and LC with core-shell particles column and fluorescence detection.

    PubMed

    Vinci, Giuliana; Antonelli, Marta L; Preti, Raffaella

    2013-02-01

A liquid-liquid microextraction method coupled to LC with fluorescence detection has been developed for the determination of the Environmental Protection Agency's 16 priority pollutant polycyclic aromatic hydrocarbons in rainwater. The optimization of the extraction method involved several parameters, including the comparison between an ultrasonic bath and a magnetic stirrer as the extraction apparatus, the choice of the extraction solvent, and the optimization of the extraction time. Liquid-liquid microextraction gave good results in terms of recoveries (from 73.6 to 102.8% in rainwater) and repeatability, with a very simple procedure and low solvent consumption. The reported chromatographic method uses a core-shell technology column with particle size <3 μm instead of a classical 5-μm particle column. The resulting backpressure was below 300 bar, allowing the use of a conventional HPLC system rather than the more expensive ultrahigh-performance LC (UHPLC). An average decrease of 59% in run time and 75% in eluent consumption was obtained compared to classical HPLC methods, while keeping good separation, sensitivity, and repeatability. The proposed conditions were successfully applied to the determination of polycyclic aromatic hydrocarbons in genuine rainwater samples. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Optimal Output of Distributed Generation Based On Complex Power Increment

    NASA Astrophysics Data System (ADS)

    Wu, D.; Bao, H.

    2017-12-01

In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind and photovoltaic power, has been widely adopted. The new energy sources access the distribution network in the form of distributed generation and are consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; its essence is the analysis of the power grid under steady power flow. After analyzing the results, we can obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  10. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this reason an innovative genetic algorithm is developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts: loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the true optimal solution. GARCO-PSU manages to solve LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  11. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.

  12. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline derivation and parallel processing, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a numerical approach the method needs no precision-losing transformation or approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
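
    The flavor of structure-driven simplification described here can be illustrated on the covariance prediction step when the transition matrix is block-diagonal, a common situation in SINS/GPS error-state models; the blocking scheme below is an illustrative assumption, not the paper's derivation.

    ```python
    import numpy as np

    def predict_cov_blockdiag(P, F_blocks, Q):
        """Covariance prediction P' = F P F^T + Q when F is block-diagonal.
        Operating block-by-block skips the zero blocks of F, the kind of
        offline, structure-driven saving the paper automates."""
        sizes = [B.shape[0] for B in F_blocks]
        idx = np.cumsum([0] + sizes)
        Pn = np.empty_like(P)
        for a, Fa in enumerate(F_blocks):
            ra = slice(idx[a], idx[a + 1])
            for b, Fb in enumerate(F_blocks):
                rb = slice(idx[b], idx[b + 1])
                Pn[ra, rb] = Fa @ P[ra, rb] @ Fb.T   # (F P F^T) block (a, b)
        return Pn + Q
    ```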

  13. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the inputs being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which takes both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map reinterpreted from the dwell time meets the velocity requirement and the acceleration or deceleration limits. Indeed, the model and algorithm can also be applied to other computer-controlled subaperture methods.
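
    A minimal sketch of such a combined formulation, assuming a linear influence-function matrix A, a first-difference operator as the dwell-time gradient, and an illustrative penalty weight lam; the paper's model and constraints may differ in detail.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def dwell_time_map(A, e, lam=0.1):
        """Choose dwell times t >= 0 minimizing ||A t - e||^2 (residual form
        error; A collects the influence function) plus lam * ||D t||^2
        penalizing the dwell-time gradient, a proxy for machine speed and
        acceleration limits."""
        n = A.shape[1]
        D = np.diff(np.eye(n), axis=0)           # first-difference operator
        def obj(t):
            r = A @ t - e
            return r @ r + lam * np.sum((D @ t) ** 2)
        def grad(t):
            return 2.0 * A.T @ (A @ t - e) + 2.0 * lam * (D.T @ (D @ t))
        t0 = np.full(n, max(float(e.mean()), 0.0))
        res = minimize(obj, t0, jac=grad, method="L-BFGS-B",
                       bounds=[(0.0, None)] * n)
        return res.x
    ```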

  14. Trajectory Control and Optimization for Responsive Spacecraft

    DTIC Science & Technology

    2012-03-22

... position and velocity, classical orbital elements, and equinoctial elements. These methods are detailed in the following sections. ... However, if the singularities are unavoidable, equinoctial orbital elements could be used. (Front-matter residue removed; recoverable figure titles cover orbital elements, the local-vertical-local-horizontal frame, and the equinoctial frame with respect to the ECI frame [17].)

  15. Development of an optimized protocol for the detection of classical swine fever virus in formalin-fixed, paraffin-embedded tissues by seminested reverse transcription-polymerase chain reaction and comparison with in situ hybridization.

    PubMed

    Ha, S-K; Choi, C; Chae, C

    2004-10-01

An optimized protocol was developed for the detection of classical swine fever virus (CSFV) in formalin-fixed, paraffin-embedded tissues obtained from experimentally and naturally infected pigs by seminested reverse transcription-polymerase chain reaction (RT-PCR). The results of seminested RT-PCR were compared with those determined by in situ hybridization. The results obtained show that deparaffinization with xylene, digestion with proteinase K, and extraction with Trizol LS, followed by seminested RT-PCR, constitute a reliable detection method. An increase in sensitivity was observed as amplicon size decreased. The highest sensitivity for RT-PCR on RNA from formalin-fixed, paraffin-embedded tissues was obtained with amplicon sizes less than approximately 200 base pairs. A hybridization signal for CSFV was detected in lymph nodes from 12 experimentally and 12 naturally infected pigs. The sensitivity of seminested RT-PCR compared with in situ hybridization was 100% for CSFV. When only formalin-fixed tissues are available, seminested RT-PCR and in situ hybridization would be useful diagnostic methods for the detection of CSFV nucleic acid.

  16. Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation

    PubMed Central

    Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.

    2013-01-01

This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities relative to the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points, and attains highly accurate segmentations. PMID:23983809

  17. The ReaxFF reactive force-field: Development, applications, and future directions

    DOE PAGES

    Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...

    2016-03-04

The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.

  18. Shell-vial culture and real-time PCR applied to Rickettsia typhi and Rickettsia felis detection.

    PubMed

    Segura, Ferran; Pons, Immaculada; Pla, Júlia; Nogueras, María-Mercedes

    2015-11-01

Murine typhus is a zoonosis transmitted by fleas whose etiological agent is Rickettsia typhi; Rickettsia felis infection can produce similar symptoms. Both are intracellular microorganisms, so their diagnosis is difficult and their infections can be misdiagnosed. Early diagnosis prevents severe disease and inappropriate treatment regimens. Serology cannot be applied during the early stages of infection because it requires seroconversion. The shell-vial (SV) culture assay is a powerful tool to detect Rickettsia. The aim of the study was to optimize SV culture using real-time PCR as the monitoring method. Moreover, the study analyzes which antibiotics are useful for isolating these microorganisms from fleas while avoiding contamination by other bacteria. For the first purpose, SVs were inoculated with each microorganism, incubated at different temperatures, and monitored by real-time PCR and classical methods (Gimenez staining and indirect immunofluorescence assay). R. typhi grew at all temperatures; R. felis grew at 28 and 32 °C. Real-time PCR was more sensitive than the classical methods and detected the microorganisms much earlier. Besides, the assay sensitivity was improved by increasing the number of SVs. For the second purpose, microorganisms and fleas were incubated and monitored in different concentrations of antibiotics. Gentamicin, sulfamethoxazole and trimethoprim were useful for R. typhi isolation; gentamicin, streptomycin, penicillin, and amphotericin B were useful for R. felis isolation. Finally, the optimized conditions were used to isolate R. felis from fleas collected at a veterinary clinic. R. felis was isolated at 28 and 32 °C; however, successful establishment of cultures was not possible, probably due to sub-optimal sample conditions.

  19. Design of high-strength refractory complex solid-solution alloys

    DOE PAGES

    Singh, Prashant; Sharma, Aayush; Smirnov, A. V.; ...

    2018-03-28

    Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for higher temperature strength and corrosion resistance. Yet, complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over the five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations, validated against our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K than commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.

  1. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution while providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In this second case, expectation maximization, when compared to Pixon image reconstruction, shows comparable accuracy and a notably reduced computational burden; when compared to CLEAN, it shows better fidelity with respect to the measurements with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
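
    The iterative scheme summarized above is the standard multiplicative EM (Richardson-Lucy-type) update for Poisson data. A minimal sketch under that standard form, with a made-up response matrix rather than RHESSI's actual modulation response:

      import numpy as np

      def em_counts(H, c, n_iter=200):
          """Multiplicative EM update for Poisson counts c ~ Poisson(H @ f), f >= 0."""
          f = np.ones(H.shape[1])                  # flat, positive starting image
          norm = H.sum(axis=0)                     # H^T 1, the EM normalization
          for _ in range(n_iter):
              model = H @ f                        # predicted modulation profile
              f *= (H.T @ (c / model)) / norm      # update preserves positivity
          model = H @ f
          # Cash statistic, used as the stopping/quality criterion
          cstat = 2.0 * np.sum(model - c + np.where(c > 0, c * np.log(c / model), 0.0))
          return f, cstat

      rng = np.random.default_rng(0)
      H = rng.uniform(0.1, 1.0, (64, 16))          # toy response matrix
      f_true = rng.uniform(0.0, 5.0, 16)
      c = rng.poisson(H @ f_true)                  # synthetic count profile
      print(em_counts(H, c)[1])                    # C-statistic of the final fit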

  2. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
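
    As a flavor of the optimization machinery, a hedged toy example of solving a constrained inverse problem with an SQP-type solver; SciPy's SLSQP stands in for the paper's tailored SQP, and the least-squares fit below is illustrative, not the free-boundary equilibrium model:

      import numpy as np
      from scipy.optimize import minimize

      # Toy inverse problem: fit parameters p to noisy data d subject to a constraint.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 50)
      d = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(t.size)   # synthetic measurements

      misfit = lambda p: np.sum((p[0] * t + p[1] - d) ** 2)     # data-fidelity term
      cons = {"type": "eq", "fun": lambda p: p[0] + p[1] - 3.0} # assumed physical constraint

      res = minimize(misfit, x0=[0.0, 0.0], method="SLSQP", constraints=[cons])
      print(res.x)   # close to (2, 1), which satisfies p0 + p1 = 3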

  3. Numerical Simulation of the Francis Turbine and CAD Used to Optimize the Runner Design (2nd).

    NASA Astrophysics Data System (ADS)

    Sutikno, Priyono

    2010-06-01

    Hydro power is the most important renewable energy source on earth. The water is free of charge, and in the generation of electric energy in a hydroelectric power station the production of greenhouse gases (mainly CO2) is negligible. Hydro power generation stations are long-term installations that can be used for 50 years and more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced if required. Within modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for rehabilitating and upgrading existing power plants, important points of concern are to predict the power output of the turbine, to achieve maximum hydraulic efficiency, and to avoid or minimize cavitation and vibrations over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery of hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, the reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting. The classic design by graphic-analytical methods is cumbersome and cannot make evident the positive or negative aspects of the design options, so it became necessary to move from the classical design methods to an adequate design method using CAD software. The many options chosen during the design calculation at a specific step can then be verified, both as an ensemble and in detail, from any point of view. The final graphic post-processing is carried out only for the optimal solution, through a 3D representation of the runner as a whole for final approval of the geometric shape. In this article, the redesign of a hydraulic turbine runner of the medium-head Francis type is investigated, with a given value of the most important parameter, the rated specific speed ns.

  4. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  5. Quantum algorithm for energy matching in hard optimization problems

    NASA Astrophysics Data System (ADS)

    Baldwin, C. L.; Laumann, C. R.

    2018-06-01

    We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p-spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.

  6. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches, the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims.
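
    For orientation, a bare-bones global-best PSO loop of the kind the hierarchical multi-swarm variant extends; the sphere fitness and all constants are illustrative, not the paper's 34-dimensional likelihood:

      import numpy as np

      def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          rng = np.random.default_rng(1)
          x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
          v = np.zeros_like(x)                            # particle velocities
          pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
          gbest = pbest[np.argmin(pbest_f)]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x += v
              f = np.apply_along_axis(fitness, 1, x)
              better = f < pbest_f                        # update personal bests
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)]           # update global best
          return gbest

      print(pso(lambda p: np.sum(p ** 2), dim=4))          # converges near the origin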

  7. SU-D-BRB-05: Quantum Learning for Knowledge-Based Response-Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Naqa, I; Ten, R

    Purpose: There is tremendous excitement in radiotherapy about applying data-driven methods to develop personalized clinical decisions for real-time response-based adaptation. However, classical statistical learning methods lack in terms of efficiency and ability to predict outcomes under conditions of uncertainty and incomplete information. Therefore, we are investigating physics-inspired machine learning approaches by utilizing quantum principles for developing a robust framework to dynamically adapt treatments to individual patient’s characteristics and optimize outcomes. Methods: We studied 88 liver SBRT patients with 35 on non-adaptive and 53 on adaptive protocols. Adaptation was based on liver function using a split-course of 3+2 fractions with a month break. The radiotherapy environment was modeled as a Markov decision process (MDP) of baseline and one month into treatment states. The patient environment was modeled by a 5-variable state represented by the patient’s clinical and dosimetric covariates. For comparison of classical and quantum learning methods, decision-making to adapt at one month was considered. The MDP objective was defined by the complication-free tumor control (P+ = TCP x (1 - NTCP)). A simple regression model represented state-action mapping. A single bit in the classical MDP and a qubit of 2 superimposed states in the quantum MDP represented the decision actions. Classical decision selection was done using reinforcement Q-learning and quantum searching was performed using Grover’s algorithm, which applies uniform superposition over possible states and yields quadratic speed-up. Results: Classical/quantum MDPs suggested adaptation (probability amplitude ≥0.5) 79% of the time for split-courses and 100% for continuous-courses. However, the classical MDP had an average adaptation probability of 0.5±0.22 while the quantum algorithm reached 0.76±0.28. In cases where adaptation failed, the classical MDP yielded a 0.31±0.26 average amplitude while the quantum approach averaged a more optimistic 0.57±0.4, but with high phase fluctuations. Conclusion: Our results demonstrate that quantum machine learning approaches provide a feasible and promising framework for real-time and sequential clinical decision-making in adaptive radiotherapy.
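
    A minimal sketch of the classical side of the comparison: tabular Q-learning over the two-state adapt/continue decision, with P+ = TCP x (1 - NTCP) as the reward. All probabilities below are invented placeholders, not the study's patient data:

      import numpy as np

      rng = np.random.default_rng(2)
      # Hypothetical reward table: P+ = TCP * (1 - NTCP) for each (state, action).
      # States: 0 = baseline, 1 = one month in; actions: 0 = continue, 1 = adapt.
      p_plus = np.array([[0.45, 0.55],
                         [0.40, 0.65]])

      Q = np.zeros((2, 2))
      alpha, eps = 0.1, 0.2
      for episode in range(5000):
          s = rng.integers(2)
          a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
          r = p_plus[s, a] + 0.05 * rng.standard_normal()   # noisy observed outcome
          Q[s, a] += alpha * (r - Q[s, a])                  # one-step (bandit-style) update

      print(np.argmax(Q, axis=1))   # learned policy: adapt in both states here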

  8. Performance evaluation of coherent Ising machines against classical neural networks

    NASA Astrophysics Data System (ADS)

    Haribara, Yoshitaka; Ishikawa, Hitoshi; Utsunomiya, Shoko; Aihara, Kazuyuki; Yamamoto, Yoshihisa

    2017-12-01

    The coherent Ising machine is expected to find a near-optimal solution to various combinatorial optimization problems, which has been experimentally confirmed with optical parametric oscillators and a field programmable gate array circuit. Similar mathematical models were proposed three decades ago by Hopfield et al. in the context of classical neural networks. In this article, we compare the computational performance of both models.

  9. A device-oriented optimizer for solving ground state problems on an approximate quantum computer, Part II: Experiments for interacting spin and molecular systems

    NASA Astrophysics Data System (ADS)

    Kandala, Abhinav; Mezzacapo, Antonio; Temme, Kristan; Bravyi, Sergey; Takita, Maika; Chavez-Garcia, Jose; Córcoles, Antonio; Smolin, John; Chow, Jerry; Gambetta, Jay

    Hybrid quantum-classical algorithms can be used to find variational solutions to generic quantum problems. Here, we present an experimental implementation of a device-oriented optimizer that uses superconducting quantum hardware. The experiment relies on feedback between the quantum device and classical optimization software which is robust to measurement noise. Our device-oriented approach uses naturally available interactions for the preparation of trial states. We demonstrate the application of this technique for solving interacting spin and molecular structure problems.

  10. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution to the field of Operational Modal Analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components whose amplitude and frequency are modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components and structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
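
    One standard estimator behind this family of detectors: the spectral kurtosis of an STFT is near zero in bins dominated by Gaussian (modal) response and strongly negative in bins dominated by a deterministic harmonic, which is what makes the two kinds of spectral maxima separable. A sketch under that common definition (signal and parameters are illustrative):

      import numpy as np
      from scipy.signal import stft

      fs = 1024.0
      t = np.arange(0, 8, 1 / fs)
      # 60 Hz harmonic buried in broadband (structural-like) Gaussian noise
      x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.default_rng(3).standard_normal(t.size)

      f, _, Z = stft(x, fs=fs, nperseg=256)          # short-time Fourier transform
      m2 = np.mean(np.abs(Z) ** 2, axis=1)           # 2nd empirical spectral moment
      m4 = np.mean(np.abs(Z) ** 4, axis=1)           # 4th empirical spectral moment
      sk = m4 / m2 ** 2 - 2.0                        # spectral kurtosis estimate

      print(f[np.argmin(sk)])   # the 60 Hz harmonic shows up as a deep negative dip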

  11. Benchmarking the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel

    2014-03-01

    We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.

  12. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  13. Practical optimal flight control system design for helicopter aircraft. Volume 1: Technical Report

    NASA Technical Reports Server (NTRS)

    Hofmann, L. G.; Riedel, S. A.; Mcruer, D.

    1980-01-01

    A method by which modern and classical theory techniques may be integrated in a synergistic fashion and used in the design of practical flight control systems is presented. A general procedure is developed, and several illustrative examples are included. Emphasis is placed not only on the synthesis of the design, but on the assessment of the results as well.

  14. Characterization of classical static noise via qubit as probe

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif

    2018-03-01

    The dynamics of the quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values, in the mentioned range, of the noise parameter are estimable with equal precision. A comparison of our results with previous studies in different classical environments is made.

  15. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute-force look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
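
    The heuristic step mentioned here is the classical closed-loop Ziegler-Nichols recipe: raise the proportional gain until the loop oscillates at the ultimate gain Ku with period Tu, then read the PID gains off fixed ratios. A sketch with placeholder Ku and Tu values:

      def ziegler_nichols_pid(Ku, Tu):
          """Classic closed-loop Ziegler-Nichols tuning for an ideal PID."""
          Kp = 0.6 * Ku
          Ti = Tu / 2.0          # integral time
          Td = Tu / 8.0          # derivative time
          return Kp, Kp / Ti, Kp * Td   # (Kp, Ki, Kd)

      # Example: sustained oscillation observed at Ku = 4.0 with period Tu = 0.8 s.
      print(ziegler_nichols_pid(4.0, 0.8))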

  16. Optimizing DER Participation in Inertial and Primary-Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop

    This paper develops an approach to enable the optimal participation of distributed energy resources (DERs) in inertial and primary-frequency response alongside conventional synchronous generators. Leveraging a reduced-order model description of frequency dynamics, DERs' synthetic inertias and droop coefficients are designed to meet time-domain performance objectives of frequency overshoot and steady-state regulation. Furthermore, an optimization-based method centered around classical economic dispatch is developed to ensure that DERs share the power injections for inertial- and primary-frequency response in proportion to their power ratings. Simulations for a modified New England test-case system composed of ten synchronous generators and six instances of the IEEE 37-node test feeder with frequency-responsive DERs validate the design strategy.
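
    The proportional-sharing idea reduces to scaling each DER's droop (and synthetic-inertia) coefficient by its power rating, so steady-state injections split in proportion to ratings. A toy sketch with invented numbers, not the paper's design procedure:

      import numpy as np

      ratings = np.array([50.0, 100.0, 250.0])        # DER power ratings (kW), illustrative
      total_droop = 20.0                               # desired aggregate droop gain (kW/Hz)

      # Proportional allocation: bigger units take proportionally more of the response.
      droop = total_droop * ratings / ratings.sum()

      df = -0.2                                        # frequency deviation (Hz)
      injections = -droop * df                         # primary-frequency response shares
      print(injections, injections / injections.sum()) # shares match rating fractions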

  17. Locking classical correlations in quantum States.

    PubMed

    DiVincenzo, David P; Horodecki, Michał; Leung, Debbie W; Smolin, John A; Terhal, Barbara M

    2004-02-13

    We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n+1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n/2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.

  18. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  19. Psychophysical Models for Signal Detection with Time Varying Uncertainty. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gai, E.

    1975-01-01

    Psychophysical models for the behavior of the human operator in detection tasks which include change in detectability, correlation between observations and deferred decisions are developed. Classical Signal Detection Theory (SDT) is discussed and its emphasis on the sensory processes is contrasted to decision strategies. The analysis of decision strategies utilizes detection tasks with time varying signal strength. The classical theory is modified to include such tasks and several optimal decision strategies are explored. Two methods of classifying strategies are suggested. The first method is similar to the analysis of ROC curves, while the second is based on the relation between the criterion level (CL) and the detectability. Experiments to verify the analysis of tasks with changes of signal strength are designed. The results show that subjects are aware of changes in detectability and tend to use strategies that involve changes in the CL's.

  20. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
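
    A minimal sketch of the underlying idea: follow the quadratic-loss penalty trajectory while warm-starting each subproblem from a first-order extrapolation of the previous solutions (the higher-order variants studied in the paper extend this same prediction step). The toy problem and solver choice are illustrative:

      import numpy as np
      from scipy.optimize import minimize

      f = lambda x: x[0] ** 2 + x[1] ** 2          # objective
      h = lambda x: x[0] + x[1] - 1.0              # equality constraint, h(x) = 0

      def penalized(x, r):
          return f(x) + 0.5 * r * h(x) ** 2        # simple quadratic loss penalty

      xs, rs = [], [10.0 ** k for k in range(1, 6)]
      x0 = np.zeros(2)
      for i, r in enumerate(rs):
          if len(xs) >= 2:                         # extrapolate the trajectory x(1/r)
              t0, t1, t = 1 / rs[i - 2], 1 / rs[i - 1], 1 / r
              x0 = xs[-1] + (xs[-1] - xs[-2]) * (t - t1) / (t1 - t0)
          x0 = minimize(penalized, x0, args=(r,)).x
          xs.append(x0)

      print(xs[-1])   # approaches the constrained minimizer (0.5, 0.5) as r grows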

  1. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques and find it hard to achieve good time performance, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation-based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
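
    The criterion being optimized is kernel-target alignment, the normalized Frobenius inner product between the Gaussian kernel matrix and the ideal label kernel yy^T. A sketch of evaluating it for a few kernel widths (data and widths are illustrative; the paper's contribution, the global d.c. optimization of this criterion, is not reproduced here):

      import numpy as np

      def alignment(X, y, sigma):
          """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
          sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
          K = np.exp(-sq / (2.0 * sigma ** 2))      # Gaussian (RBF) kernel matrix
          Y = np.outer(y, y)                        # ideal target kernel, y in {-1, +1}
          return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(+1, 0.3, (20, 2))])
      y = np.r_[-np.ones(20), np.ones(20)]
      print([round(alignment(X, y, s), 3) for s in (0.1, 1.0, 10.0)])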

  2. An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring

    NASA Astrophysics Data System (ADS)

    Chen, R.; Sun, Y. Y.; Lei, Y.

    2017-12-01

    With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is how to acquire the accurate position of a ground object in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with traditional methods is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves 1 or 2 times better efficiency with no loss of accuracy compared with traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from serious dependence on the initial values, making it unable to converge fast or converge to a stable state. On the contrary, the APCP method can deal with quite complex UAS conditions when conducting monitoring, as it represents points in space with angles, including the condition that sequential images focusing on one object have a zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data, and shows that the new model can effectively solve bottlenecks of the classical method to a certain degree; that is, it provides a new idea and solution for faster and more efficient environmental monitoring.

  3. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as the particles' inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.

  4. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. Here, a comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
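
    As a flavor of the optimal-versus-classical comparison the survey makes, here is a scalar Kalman filter smoothing a constant signal in sensor noise; the noise levels are invented, and a moving-average filter would be the classical baseline:

      import numpy as np

      rng = np.random.default_rng(5)
      truth = 1.0
      z = truth + 0.3 * rng.standard_normal(200)   # noisy sensor readings

      xhat, P = 0.0, 1.0                            # state estimate and its variance
      Q, R = 1e-5, 0.3 ** 2                         # process and measurement noise
      for zk in z:
          P += Q                                    # predict (static state model)
          K = P / (P + R)                           # Kalman gain
          xhat += K * (zk - xhat)                   # correct with the innovation
          P *= (1.0 - K)

      print(xhat)   # close to 1.0, with variance P much smaller than R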

  5. Solving multi-objective optimization problems in conservation with the reference point method

    PubMed Central

    Dujardin, Yann; Chadès, Iadine

    2018-01-01

    Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
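
    The core of a reference point approach is an achievement scalarizing function: the decision-maker states an aspiration point in objective space, and the method returns the feasible alternative minimizing the (augmented) weighted Chebyshev distance to it. A sketch over a small discrete set of candidate plans (all numbers invented, and the papers' problems are solved as linear programs rather than by enumeration):

      import numpy as np

      # Rows: candidate plans; columns: objectives to maximize
      # (e.g., species persistence, area protected), illustrative values.
      F = np.array([[0.80, 0.20],
                    [0.55, 0.60],
                    [0.30, 0.90]])

      def achievement(F, ref, w, rho=1e-3):
          """Augmented Chebyshev achievement function (smaller is better)."""
          d = w * (ref - F)                 # shortfall from the reference point
          return d.max(axis=1) + rho * d.sum(axis=1)

      ref = np.array([0.70, 0.70])          # decision-maker's aspiration levels
      w = np.array([1.0, 1.0])
      print(np.argmin(achievement(F, ref, w)))   # plan 1: the balanced compromise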

  6. Design of optimal and ideal 2-D concentrators with the collector immersed in a dielectric tube

    NASA Astrophysics Data System (ADS)

    Minano, J. C.; Ruiz, J. M.; Luque, A.

    1983-12-01

    A method is presented for designing ideal and optimal 2-D concentrators when the collector is placed inside a dielectric tube, for the particular case of a bifacial solar collector. The prototype 2-D (cylindrical geometry) concentrator is the compound parabolic concentrator or CPC, and from the beginning of development, it was found by Winston (1978) that filling up the concentrator with a transparent dielectric medium results in a big improvement of the optical properties. The method reported here is based on the extreme ray principle of design and avoids the use of differential equations by means of a proper application of Fermat's principle. One advantage of these concentrators is that they allow the size to be small compared with classical CPCs.

  7. Sequential design of discrete linear quadratic regulators via optimal root-locus techniques

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar

    1989-01-01

    A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.

  8. Rotation in vibration, optimization, and aeroelastic stability problems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.

    1974-01-01

    The effects of rotation in the areas of vibrations, dynamic stability, optimization, and aeroelasticity were studied. The governing equations of motion for the study of vibration and dynamic stability of a rapidly rotating deformable body were developed starting from the nonlinear theory of elasticity. Some common features such as the limitations of the classical theory of elasticity, the choice of axis system, the property of self-adjointness, the phenomenon of frequency splitting, shortcomings of stability methods as applied to gyroscopic systems, and the effect of internal and external damping on stability in gyroscopic systems are identified and discussed, and are then applied to three specific problems.

  9. Field development planning using simulated annealing - optimal economic well scheduling and placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckner, B.L.; Xong, X.

    1995-12-31

    A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, nonuniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
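
    A toy version of the travelling-salesman framing: the state is a drilling order, a move swaps two wells, and the objective is discounted NPV, so high-revenue wells migrate to early slots. Revenues and the discount rate are invented; the real model prices interference and reservoir variability through the simulator:

      import numpy as np

      rng = np.random.default_rng(6)
      revenue = rng.uniform(5.0, 20.0, 12)      # per-well revenue (arbitrary units)
      disc = 0.15                               # annual discount rate, one well per year

      def npv(order):
          t = np.arange(len(order))
          return np.sum(revenue[order] / (1.0 + disc) ** t)

      order = rng.permutation(len(revenue))
      T = 1.0
      for step in range(20000):
          i, j = rng.integers(len(order), size=2)
          cand = order.copy()
          cand[i], cand[j] = cand[j], cand[i]   # swap move on the drilling sequence
          if npv(cand) > npv(order) or rng.random() < np.exp((npv(cand) - npv(order)) / T):
              order = cand                       # accept uphill, occasionally downhill
          T *= 0.9995                            # geometric cooling schedule

      print(order, npv(order))   # high-revenue wells end up in the early slots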

  10. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    NASA Astrophysics Data System (ADS)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations when compared with the model employing the classical one-level ICA. In summary: a model is proposed to identify the characteristics of immiscible NAPL contaminant sources; the contaminant is immiscible in water and multi-phase flow is simulated; the model is a multi-level saturation-based optimization algorithm based on the ICA; each answer string in the second level is divided into a set of provinces; and each ICA is modified by incorporating the new knock-the-base method.

  11. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    The general strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14 as well as IEEE 30 bus systems and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  12. A Direct Algorithm Maple Package of One-Dimensional Optimal System for Group Invariant Solutions

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Han, Zhong; Chen, Yong

    2018-01-01

    To construct the one-dimensional optimal system of a finite dimensional Lie algebra automatically, we develop a new Maple package, One Optimal System. Meanwhile, we propose a new method to calculate the adjoint transformation matrix and to find all the invariants of the Lie algebra, in addition to the Killing form, when checking the possible constraints of each classification. Besides, a new concept called the invariance set is introduced. Moreover, this Maple package is shown to be more efficient and precise than previous approaches by applying it to some classic examples. Supported by the Global Change Research Program of China under Grant No. 2015CB95390, National Natural Science Foundation of China under Grant Nos. 11675054 and 11435005, and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No. ZF1213.

  13. Optimal Shakedown of the Thin-Wall Metal Structures Under Strength and Stiffness Constraints

    NASA Astrophysics Data System (ADS)

    Alawdin, Piotr; Liepa, Liudas

    2017-06-01

    Classical optimization problems for metal structures are confined mainly to 1st class cross-sections. But in practice it is common to use cross-sections of higher classes. In this paper, a new mathematical model is presented for the shakedown optimization problem for metal structures whose elements are designed from 1st to 4th class cross-sections, under variable quasi-static loads. The features of limited plastic redistribution of forces in structures with thin-walled elements are taken into account. The authors assume elastic-plastic flexural buckling in one plane without lateral torsional buckling behavior of members. Design formulae for Methods 1 and 2 for members are analyzed. Structural stiffness constraints are also incorporated in order to satisfy the serviceability limit state requirements. With the help of mathematical programming theory and extreme principles, the structure optimization algorithm is developed and justified with a numerical experiment for metal plane frames.

  14. Shade guide optimization--a novel shade arrangement principle for both ceramic and composite shade guides when identifying composite test objects.

    PubMed

    Østervemb, Niels; Jørgensen, Jette Nedergaard; Hørsted-Bindslev, Preben

    2011-02-01

    The most widely used shade guide for composite materials is made of ceramic and arranged according to a non-proven method. There is a need for a composite shade guide using a scientifically based arrangement principle. The aim was to compare the shade tab arrangement of the Vitapan Classical shade guide and an individually made composite shade guide, using both the originally proposed arrangement principle and an arrangement according to ΔE2000 values with hue group division. An individual composite shade guide made from Filtek Supreme XT body colors was compared to the Vitapan Classical shade guide. Twenty-five students matched color samples made from Filtek Supreme XT body colors using the two shade guides arranged after the two proposed principles, four shade guides in total. Age, sequence, gender, time, and number of correct matches were recorded. The proposed visually optimal composite shade guide was both the fastest and had the highest number of correct matches. Gender was significantly associated with the time used for color sampling but not with the number of correct shade matches. A composite shade guide is superior to the ceramic Vitapan Classical guide when using composite test objects. A rearrangement of the shade guide according to hue, subdivided according to ΔE2000, significantly reduces the time needed to take a color sample and increases the number of correct shade matches. Total color difference in relation to the lightest tab, with hue group division, is recommended as a possible and universally applicable mode of tab arrangement in dental color standards. Moreover, a shade guide made of the composite material itself is to be preferred as both a faster and more accurate method of determining color. © 2011, COPYRIGHT THE AUTHORS. JOURNAL COMPILATION © 2011, WILEY PERIODICALS, INC.

  15. Screening and production of a potent extracellular Arthrobacter creatinolyticus urease for determination of heavy metal ions.

    PubMed

    Ramesh, Rajendran; Aarthy, Mayilvahanan; Gowthaman, Marichetti Kuppuswami; Gabrovska, Katya; Godjevargova, Tzonka; Kamini, Numbi Ramudu

    2014-04-01

    This paper describes the isolation of a potent extracellular urease-producing microorganism, identified by 16S rRNA as Arthrobacter creatinolyticus MTCC 5604, and its medium optimization by the classical one-factor-at-a-time method and a central composite rotatable design (CCRD), a tool of response surface methodology (RSM). An optimal activity of 9.0 U ml⁻¹ was obtained by the classical method, and statistical optimization of the medium resulted in an activity of 17.35 U ml⁻¹ at 48 h and 30 °C. This activity was 4.91 times greater than the initial activity (3.53 U ml⁻¹) from the basal medium, and the enzyme showed maximum activity at pH 8.0 and 60 °C and was stable at pH 7.0-9.0 and temperatures up to 50 °C. Furthermore, the enzyme was assessed for its activity reduction by determining the inhibitory concentration (IC₅₀) of heavy metal ions, and the inhibition of urease was in the order Cu(II) > Cd(II) > Zn(II) > Ni(II). Urease was highly sensitive to Cu(II), and its inhibition was 94% and 100% in model solutions containing a mixture of Cu(II) with the heavy metal ions Cd(II) and Zn(II), respectively. The results of these studies suggested that the enzyme could be utilized in sensors to determine the levels of Cu(II) ions in industrial effluents, contaminated soil and ground water. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. [Application of rapid PCR to authenticate medicinal snakes].

    PubMed

    Chen, Kang; Jiang, Chao; Yuan, Yuan; Huang, Lu-Qi; Li, Man

    2014-10-01

    To obtain an accurate, rapid and efficient method for authenticating the medicinal snakes listed in the Chinese Pharmacopoeia (Zaocys dhumnades, Bungarus multicinctus, Agkistrodon acutus), a rapid PCR method for authenticating these snakes and their adulterants was established based on classic molecular authentication methods. DNA was extracted by alkaline lysis and the specific primers were amplified by a two-step PCR amplification method. The denaturing and annealing temperatures and cycle numbers were optimized. When 100 x SYBR Green I was added to the PCR product, strong green fluorescence was visualized under 365 nm UV for the authentic snakes, whereas the adulterants showed none. The whole process can be completed in 30-45 minutes. The established method provides technical support for the authentication of these snakes in the field.

  17. Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Barakat, Nada

    2018-02-01

    An evolutionary optimization approach is adopted in this paper for simultaneously achieving economical and productive soil cutting. The economic aspect is addressed by minimizing the power requirement from the bulldozer, and the soil cutting is made productive by minimizing the time of soil cutting. For determining the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed, which limit the power from the bulldozer, limit the maximum force on the bulldozer blade, and achieve the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical optimization technique using the ɛ-constraint method. The Pareto-optimal solutions are obtained along with the knee region. Further, a post-optimal analysis is performed on the obtained solutions to decipher relationships among the objectives and decision variables. Such relationships are later used to formulate guidelines for selecting the optimal set of input parameters. The obtained results are then compared with experimental results from the literature, which show a close agreement.
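
    The ε-constraint step used for the classical comparison can be sketched generically: keep one objective (cutting time) as a bound, minimize the other (power), and sweep the bound to trace the Pareto front. The two analytic objectives below are placeholders for the paper's force-model-based ones:

      import numpy as np
      from scipy.optimize import minimize

      # x = blade depth-of-cut (toy scalar design variable in [0.1, 1.0]).
      power = lambda x: 4.0 * x[0] ** 2 + 1.0       # placeholder power demand
      time_ = lambda x: 1.0 / x[0]                  # placeholder cutting time

      front = []
      for eps in np.linspace(1.2, 6.0, 6):          # sweep the time budget
          cons = {"type": "ineq", "fun": lambda x, e=eps: e - time_(x)}  # time <= eps
          res = minimize(power, x0=[0.5], bounds=[(0.1, 1.0)],
                         method="SLSQP", constraints=[cons])
          front.append((time_(res.x), power(res.x)))

      print(front)   # trade-off curve: tighter time budgets cost more power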

  18. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  19. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  20. Minimization of Poisson’s ratio in anti-tetra-chiral two-phase structure

    NASA Astrophysics Data System (ADS)

    Idczak, E.; Strek, T.

    2017-10-01

    One of the most important goals of modern material science is designing structures which exhibit appropriate properties. These properties can be obtained by optimization methods which often use numerical calculations, e.g. the finite element method (FEM). This paper presents the results of a topological optimization used to obtain the greatest possible negative Poisson's ratio of a two-phase composite. The shape is an anti-tetra-chiral two-dimensional unit cell of a lattice structure which has a negative Poisson's ratio when it is built of one solid material. The two phases used in the optimization are two solid materials with positive Poisson's ratio and Young's modulus. The distribution of the hard reinforcement material inside the soft matrix material in the anti-tetra-chiral domain influences the mechanical properties of the structure. The calculations show that the resultant structure has a negative Poisson's ratio up to eight times smaller than the homogeneous anti-tetra-chiral structure made of one classic material. In the analysis, FEM is coupled with the Method of Moving Asymptotes (MMA) algorithm. The material property distributions are described and calculated by means of a shape interpolation scheme, the Solid Isotropic Material with Penalization (SIMP) method.
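
    The SIMP interpolation at the heart of such two-phase formulations is a one-liner: each element's Young's modulus is blended between the soft and hard phases with a penalization exponent that pushes intermediate densities toward 0 or 1. A sketch with illustrative moduli and exponent:

      import numpy as np

      def simp_modulus(rho, E_soft=1.0, E_hard=10.0, p=3.0):
          """Two-phase SIMP: E(rho) = E_soft + rho**p * (E_hard - E_soft)."""
          return E_soft + rho ** p * (E_hard - E_soft)

      rho = np.array([0.0, 0.3, 0.7, 1.0])     # element design densities
      print(simp_modulus(rho))                  # intermediate densities are penalized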

  1. Determination of optimal self-drive tourism route using the orienteering problem method

    NASA Astrophysics Data System (ADS)

    Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah

    2013-04-01

    This paper determines optimal travel routes for self-drive tourism based on allocations of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism is a type of tourism in which tourists hire or travel in their own vehicle; it only involves tourist destinations that can be linked by a road network. Normally, the traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are used for minimization problems such as determining the shortest travel time or distance. This paper takes the alternative maximization approach, maximizing attraction scores, and tests it on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem is used to determine the optimal travel route. This approach is then extended to the team orienteering problem, and the two methods are compared. Both models were solved using the LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
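
    For illustration, a brute-force sketch of the orienteering problem on a toy instance follows; the distances, scores, and budget are invented for the example and are unrelated to the Kedah data, and realistic instances would be handed to an integer-programming solver such as the LINGO software mentioned above.

    ```python
    from itertools import permutations

    # Toy orienteering instance: city 0 is the start/end depot, the others
    # carry attraction scores. All numbers are illustrative.
    scores = {1: 8, 2: 5, 3: 6, 4: 3}
    dist = {(i, j): abs(i - j) + 1 for i in range(5) for j in range(5) if i != j}
    budget = 7  # maximum total travel time

    def best_route():
        """Enumerate subsets and orders of cities; keep the feasible route
        that maximizes the collected score. Fine for tiny instances only."""
        best = (0, ())
        cities = list(scores)
        for r in range(1, len(cities) + 1):
            for perm in permutations(cities, r):
                tour = (0,) + perm + (0,)
                length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
                if length <= budget:
                    total = sum(scores[c] for c in perm)
                    best = max(best, (total, tour))
        return best

    print(best_route())
    ```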

  2. Gossip algorithms in quantum networks

    NASA Astrophysics Data System (ADS)

    Siomau, Michael

    2017-01-01

    Gossip algorithms are protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.

  3. Adaptive critic designs for optimal control of uncertain nonlinear systems with unmatched interconnections.

    PubMed

    Yang, Xiong; He, Haibo

    2018-05-26

    In this paper, we develop a novel optimal control strategy for a class of uncertain nonlinear systems with unmatched interconnections. To begin with, we present a stabilizing feedback controller for the interconnected nonlinear systems by modifying an array of optimal control laws of auxiliary subsystems. We also prove that this feedback controller ensures a specified cost function to achieve optimality. Then, under the framework of adaptive critic designs, we use critic networks to solve the Hamilton-Jacobi-Bellman equations associated with auxiliary subsystem optimal control laws. The critic network weights are tuned through the gradient descent method combined with an additional stabilizing term. By using the newly established weight tuning rules, we no longer need the initial admissible control condition. In addition, we demonstrate that all signals in the closed-loop auxiliary subsystems are stable in the sense of uniform ultimate boundedness by using classic Lyapunov techniques. Finally, we provide an interconnected nonlinear plant to validate the present control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Complexity of the Quantum Adiabatic Algorithm

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2013-03-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm: We analyze the efficiency of the algorithm on several "hard" (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to "real life" problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as "D-Wave One", which imposes serious restrictions as to which types of problems may be tested. Finally, we discuss different approaches to find improved implementations of the algorithm such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space and others.

  5. Detection of the ice assertion on aircraft using empirical mode decomposition enhanced by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Bagherzadeh, Seyed Amin; Asadi, Davood

    2017-05-01

    In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in the icing condition, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified by some benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in the icing condition in order to detect the ice assertion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.

  6. Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces

    NASA Astrophysics Data System (ADS)

    Neukart, Florian; Von Dollen, David; Seidel, Christian; Compostella, Gabriele

    2017-12-01

    Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum annealing machines produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology's usefulness for optimization and sampling tasks. Here, we present a way to partially embed both Monte Carlo policy iteration for finding an optimal policy on random observations and n sub-optimal state-value functions for approximating an improved state-value function given a policy, for finite-horizon games with discrete state spaces, on a D-Wave 2000Q quantum processing unit (QPU). We explain how both problems can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that quantum-enhanced Monte Carlo policy evaluation allows for finding equivalent or better state-value functions for a given policy with the same number of episodes compared to a purely classical Monte Carlo algorithm. Additionally, we describe a quantum-classical policy learning algorithm. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of the QPU, and not to prove supremacy over every existing classical policy evaluation algorithm.
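
    To illustrate the QUBO form these embeddings rely on, the sketch below finds the minimizer of a tiny, made-up QUBO matrix by exhaustive search in Python; an annealer would sample low-energy states of the same objective physically.

    ```python
    import numpy as np
    from itertools import product

    # Minimal QUBO sketch: minimize x^T Q x over binary x. The 3-variable
    # upper-triangular Q below is an invented illustration.
    Q = np.array([[-1.0, 2.0, 0.0],
                  [ 0.0, -1.0, 2.0],
                  [ 0.0,  0.0, -1.0]])

    def solve_qubo_brute_force(Q):
        n = Q.shape[0]
        best_x, best_e = None, float("inf")
        for bits in product([0, 1], repeat=n):
            x = np.array(bits, float)
            e = x @ Q @ x        # QUBO energy of this bit assignment
            if e < best_e:
                best_x, best_e = bits, e
        return best_x, best_e

    print(solve_qubo_brute_force(Q))
    ```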

  7. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum- classical hybrid-quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables, using classical methods. Such optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger value of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the problem of global maximum of an integrable function can be found in polynomial time by using the proposed quantum- classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).

  8. A survival tree method for the analysis of discrete event times in clinical and epidemiological studies.

    PubMed

    Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard

    2016-02-28

    Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
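
    A minimal sketch of the equivalence the method builds on: discrete survival data can be expanded into person-period form with a binary outcome, after which any binary-outcome tree learner estimates the discrete hazard. The toy data and column names below are invented for illustration.

    ```python
    import pandas as pd

    # Person-period expansion: the likelihood of a discrete hazard model
    # equals that of a binary regression on this augmented data set, which
    # is what lets binary-outcome tree methods be reused.
    subjects = pd.DataFrame({
        "id":    [1, 2, 3],
        "time":  [2, 3, 1],   # discrete failure/censoring period
        "event": [1, 0, 1],   # 1 = failure observed, 0 = censored
    })

    rows = []
    for _, s in subjects.iterrows():
        for t in range(1, int(s["time"]) + 1):
            rows.append({
                "id": s["id"],
                "period": t,
                # binary outcome: did this subject fail in this period?
                "y": int(t == s["time"] and s["event"] == 1),
            })
    person_period = pd.DataFrame(rows)
    print(person_period)
    # A binary tree fit to y ~ period + covariates then estimates the
    # discrete hazard h(t | x) = P(T = t | T >= t, x).
    ```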

  9. Loop shaping design for tracking performance in machine axes.

    PubMed

    Schinstock, Dale E; Wei, Zhouhong; Yang, Tao

    2006-01-01

    A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.

  10. Measurement of delta13C and delta18O Isotopic Ratios of CaCO3 by a Thermoquest Finnigan GasBench II Delta Plus XL Continuous Flow Isotope Ratio Mass Spectrometer with Application to Devils Hole Core DH-11 Calcite

    USGS Publications Warehouse

    Revesz, Kinga M.; Landwehr, Jurate Maciunas; Keybl, Jaroslav Edward

    2001-01-01

    A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400±20 µg) of calcium carbonate. This new method streamlines the classical phosphoric acid - calcium carbonate (H3PO4 - CaCO3) reaction method by making use of a Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. To obtain reproducible and accurate results, optimal conditions for the H3PO4 - CaCO3 reaction had to be determined. At the acid-carbonate reaction temperature suggested by the equipment manufacturer, the oxygen isotope ratio results were unsatisfactory (standard deviation (σ) greater than 1.5 per mill), probably because of a secondary reaction. When the acid-carbonate reaction temperature was lowered to 26 °C and the reaction time was increased to 24 hours, the precision of the carbon and oxygen isotope ratios for duplicate analyses improved to 0.1 and 0.2 per mill, respectively. The method was tested by analyzing calcite from Devils Hole, Nevada, which was formed by precipitation from ground water onto the walls of a sub-aqueous cavern during the last 500,000 years. Isotope-ratio values previously had been obtained by the classical method for Devils Hole core DH-11. The DH-11 core had been recently re-sampled, and isotope-ratio values were obtained using this new method. The results were comparable to those obtained by the classical method. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, a cutting error that was then independently confirmed. The reproducibility of the isotopic values is demonstrated by a correlation of approximately 0.96 for both isotopes, after correcting for an alignment offset. This result indicates that the new method is a viable alternative to the classical method. In particular, the new method requires less sample material, permitting finer resolution, and allows automation of some processes, resulting in considerable time savings.

  11. The Artificial Neural Networks Based on Scalarization Method for a Class of Bilevel Biobjective Programming Problem

    PubMed Central

    Chen, Zhong; Liu, June; Li, Xiong

    2017-01-01

    A two-stage artificial neural network (ANN) based on scalarization method is proposed for bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is firstly expressed as the set of minimal solutions of a biobjective optimization problem by using scalar approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN for exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446

  12. Self-consistent self-interaction corrected density functional theory calculations for atoms using Fermi-Löwdin orbitals: Optimized Fermi-orbital descriptors for Li-Kr

    NASA Astrophysics Data System (ADS)

    Kao, Der-you; Withanage, Kushantha; Hahn, Torsten; Batool, Javaria; Kortus, Jens; Jackson, Koblar

    2017-10-01

    In the Fermi-Löwdin orbital method for implementing self-interaction corrections (FLO-SIC) in density functional theory (DFT), the local orbitals used to make the corrections are generated in a unitary-invariant scheme via the choice of the Fermi orbital descriptors (FODs). These are M positions in 3-d space (for an M-electron system) that can be loosely thought of as classical electron positions. The orbitals that minimize the DFT energy including the SIC are obtained by finding optimal positions for the FODs. In this paper, we present optimized FODs for the atoms from Li-Kr obtained using an unbiased search method and self-consistent FLO-SIC calculations. The FOD arrangements display a clear shell structure that reflects the principal quantum numbers of the orbitals. We describe trends in the FOD arrangements as a function of atomic number. FLO-SIC total energies for the atoms are presented and are shown to be in close agreement with the results of previous SIC calculations that imposed explicit constraints to determine the optimal local orbitals, suggesting that FLO-SIC yields the same solutions for atoms as these computationally demanding earlier methods, without invoking the constraints.

  13. Convex Optimization over Classes of Multiparticle Entanglement

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Gühne, Otfried

    2018-02-01

    A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].

  14. A software methodology for compiling quantum programs

    NASA Astrophysics Data System (ADS)

    Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias

    2018-04-01

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.

  15. Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow

    NASA Astrophysics Data System (ADS)

    Aida-zade, K. R.; Ashrafova, E. R.

    2017-12-01

    An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.

  16. Communication theory of quantum systems. Ph.D. Thesis, 1970

    NASA Technical Reports Server (NTRS)

    Yuen, H. P. H.

    1971-01-01

    Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.

  17. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    NASA Astrophysics Data System (ADS)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    Nonlinear conjugate gradient (CG) methods play an important role in solving large-scale unconstrained optimization problems. They are widely used due to their simplicity and are known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of the number of iterations and the central processing unit (CPU) time, using MATLAB software on an Intel Core i7-3470 CPU. Numerical results show that the new βk converges rapidly compared to other classical CG methods.
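
    The paper's new βk formula is not reproduced in this record, so the sketch below illustrates the general framework with the classical Fletcher-Reeves coefficient under SciPy's strong-Wolfe line search; the Rosenbrock test function, tolerances, and restart fallback are illustrative choices.

    ```python
    import numpy as np
    from scipy.optimize import line_search

    def f(x):  # Rosenbrock function as a stand-in test problem
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    def grad(x):
        return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                         200.0 * (x[1] - x[0]**2)])

    def cg_fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=500):
        x = np.asarray(x0, float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # SciPy's line_search enforces the strong Wolfe conditions
            alpha = line_search(f, grad, x, d)[0]
            if alpha is None:          # line search failed; restart along -g
                d, alpha = -g, 1e-3
            x = x + alpha * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            g = g_new
        return x

    print(cg_fletcher_reeves(f, grad, np.array([-1.2, 1.0])))
    ```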

  18. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It resolves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then derived from the optimal perspective. Finally, a case analysis of China Garment Network is provided for illustrative purposes.
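
    As a sketch of the AHP ingredient, the Python fragment below derives criterion weights as the principal eigenvector of a pairwise comparison matrix and checks Saaty's consistency ratio; the 3x3 matrix and the criteria it might compare are invented for illustration.

    ```python
    import numpy as np

    # AHP weight derivation: the priority vector is the principal eigenvector
    # of a pairwise comparison matrix (criteria could be, e.g., transaction
    # history, complaint rate, qualitative trust; values are illustrative).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                      # normalized criterion weights

    # Consistency check: CI = (lambda_max - n) / (n - 1), compared against
    # Saaty's random index (RI = 0.58 for n = 3); CR < 0.1 is acceptable.
    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)
    CR = CI / 0.58
    print("weights:", w, "CR:", CR)
    ```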

  19. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-class and 3-class problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
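
    The GAFDS frequency-domain search itself is not spelled out in this record; the sketch below shows only the generic GA feature-selection skeleton (tournament selection, one-point crossover, bit-flip mutation, wrapper fitness) on synthetic data, with all parameters chosen for illustration.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                               random_state=1)

    def fitness(mask):
        """Score a binary feature mask by cross-validated accuracy."""
        if mask.sum() == 0:
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    def ga_feature_search(pop_size=20, n_gen=15, p_mut=0.05):
        pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
        for _ in range(n_gen):
            scores = np.array([fitness(ind) for ind in pop])
            # binary tournament selection
            parents = pop[[max(rng.integers(0, pop_size, 2),
                               key=lambda i: scores[i])
                           for _ in range(pop_size)]]
            # one-point crossover between consecutive parent pairs
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):
                cut = rng.integers(1, X.shape[1])
                children[i, cut:], children[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
            # bit-flip mutation
            flips = rng.random(children.shape) < p_mut
            pop = np.where(flips, 1 - children, children)
        scores = np.array([fitness(ind) for ind in pop])
        return pop[scores.argmax()], scores.max()

    mask, score = ga_feature_search()
    print("selected features:", np.flatnonzero(mask), "cv accuracy:", round(score, 3))
    ```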

  1. Structural optimization under overhang constraints imposed by additive manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Allaire, G.; Dapogny, C.; Estevez, R.; Faure, A.; Michailidis, G.

    2017-12-01

    This article addresses one of the major constraints imposed by additive manufacturing processes on shape optimization problems - that of overhangs, i.e. large regions hanging over void without sufficient support from the lower structure. After revisiting the 'classical' geometric criteria used in the literature, based on the angle between the structural boundary and the build direction, we propose a new mechanical constraint functional, which mimics the layer by layer construction process featured by additive manufacturing technologies, and thereby appeals to the physical origin of the difficulties caused by overhangs. This constraint, as well as some variants, is precisely defined; their shape derivatives are computed in the sense of Hadamard's method, and numerical strategies are extensively discussed, in two and three space dimensions, to efficiently deal with the appearance of overhang features in the course of shape optimization processes.

  2. Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Moin; Korte, John J.

    1996-01-01

    An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.

  3. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
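
    A minimal sketch of the D-optimal ingredient for the logistic example: score a candidate set of sampling times by the determinant of the Fisher information matrix built from finite-difference sensitivities. The nominal parameter values, noise assumption (constant variance), and time grid below are illustrative, not those of the paper.

    ```python
    import numpy as np
    from itertools import combinations

    # Verhulst-Pearl logistic model x(t) = K x0 e^{rt} / (K + x0 (e^{rt} - 1)),
    # with theta = (K, r, x0); nominal values are invented for illustration.
    theta = np.array([17.5, 0.7, 0.1])

    def model(t, th):
        K, r, x0 = th
        return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

    def sensitivities(t, th, h=1e-6):
        """Forward-difference sensitivities d(model)/d(theta) at time t."""
        base = model(t, th)
        return np.array([(model(t, th + h * np.eye(len(th))[j]) - base) / h
                         for j in range(len(th))])

    def d_criterion(times):
        """det of the Fisher information matrix under constant noise variance."""
        F = np.zeros((3, 3))
        for t in times:
            s = sensitivities(t, theta)
            F += np.outer(s, s)
        return np.linalg.det(F)

    candidates = np.linspace(0.0, 25.0, 11)
    best = max(combinations(candidates, 5), key=d_criterion)
    print("D-optimal 5-point design:", best)
    ```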

  4. Improved methodologies for the preparation of highly substituted pyridines.

    PubMed

    Fernández Sainz, Yolanda; Raw, Steven A; Taylor, Richard J K

    2005-11-25

    Two separate strategies have been developed for the preparation of highly substituted pyridines from 1,2,4-triazines via the inverse-electron-demand Diels-Alder reaction: a microwave-promoted, solvent-free procedure and a tethered imine-enamine (TIE) approach. Both routes avoid the need for a discrete aromatization step and offer significant advantages over the classical methods, giving a wide variety of tri-, tetra-, and penta-substituted pyridines in high, optimized yields.

  5. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods that try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as the evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, can produce completely new search procedures, in many cases with extremely effective exploration capabilities, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the modeling of specific real phenomena, and their novelty in comparison with alternative existing algorithms for optimization. We first review important concepts regarding optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for tackling hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion of the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can be used to ease the implementation of these algorithms.
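
    As a concrete baseline for physics-inspired meta-heuristics, here is a minimal simulated annealing sketch in Python, the classical thermal ancestor of the nonlinear-physics algorithms reviewed; the Rastrigin test function, cooling schedule, and step size are illustrative choices.

    ```python
    import math, random

    random.seed(0)

    def rastrigin(x):
        """Multimodal test function standing in for a hard problem."""
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                                 for xi in x)

    def simulated_annealing(f, x0, T0=10.0, cooling=0.995, steps=20000):
        """Canonical physics-based meta-heuristic: accept worse moves with
        probability exp(-delta/T), with T lowered by a geometric schedule."""
        x, fx = list(x0), f(x0)
        best, fbest = list(x), fx
        T = T0
        for _ in range(steps):
            cand = [xi + random.gauss(0, 0.5) for xi in x]
            fc = f(cand)
            if fc < fx or random.random() < math.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            T *= cooling
        return best, fbest

    print(simulated_annealing(rastrigin, [3.0, -2.0]))
    ```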

  6. Optimal integral force feedback for active vibration control

    NASA Astrophysics Data System (ADS)

    Teo, Yik R.; Fleming, Andrew J.

    2015-11-01

    This paper proposes an improvement to Integral Force Feedback (IFF), which is a popular method for active vibration control of structures and mechanical systems. Benefits of IFF include robustness, guaranteed stability and simplicity. However, the maximum damping performance is dependent on the stiffness of the system; hence, some systems cannot be adequately controlled. In this paper, an improvement to the classical force feedback control scheme is proposed. The improved method achieves arbitrary damping for any mechanical system by introducing a feed-through term. The proposed improvement is experimentally demonstrated by actively damping an objective lens assembly for a high-speed confocal microscope.

  7. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3-4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.

  8. Linear Quantum Systems: Non-Classical States and Robust Stability

    DTIC Science & Technology

    2016-06-29

    quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for... derivation of quantum filtering equations for systems with non-classical input states, including single-photon states, (ii) determination of how linear... history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control

  9. Performance Analysis and Optimization of the Winnow Secret Key Reconciliation Protocol

    DTIC Science & Technology

    2011-06-01

    use in a quantum key system can be defined in two ways: the number of messages passed between Alice and Bob, the... classical and quantum environment. Post-quantum cryptography, which is generally used to describe classical quantum-resilient protocols, includes... composed of a one-way quantum channel and a two-way classical channel. Owing to the physics of the channel, the quantum channel is subject to

  10. Accuracy limit of rigid 3-point water models

    NASA Astrophysics Data System (ADS)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  11. Efficient quantum repeater with respect to both entanglement-concentration rate and complexity of local operations and classical communication

    NASA Astrophysics Data System (ADS)

    Su, Zhaofeng; Guan, Ji; Li, Lvzhou

    2018-01-01

    Quantum entanglement is an indispensable resource for many significant quantum information processing tasks. However, in practice, it is difficult to distribute quantum entanglement over a long distance, due to the absorption and noise in quantum channels. A solution to this challenge is a quantum repeater, which can extend the distance of entanglement distribution. In this scheme, the time consumed by classical communication and local operations is an important factor in the overall time efficiency. Motivated by this observation, we consider a basic quantum repeater scheme that focuses on not only the optimal rate of entanglement concentration but also the complexity of local operations and classical communication. First, we consider the case where two different two-qubit pure states are initially distributed in the scenario. We construct a protocol with the optimal entanglement-concentration rate and less consumption of local operations and classical communication. We also find a criterion for the projective measurements to achieve the optimal probability of creating a maximally entangled state between the two ends. Second, we consider the case in which two general pure states are prepared and general measurements are allowed. We get an upper bound on the probability for a successful measurement operation to produce a maximally entangled state without any further local operations.

  12. Airplane numerical simulation for the rapid prototyping process

    NASA Astrophysics Data System (ADS)

    Roysdon, Paul F.

    Airplane Numerical Simulation for the Rapid Prototyping Process is a comprehensive research investigation into the most up-to-date methods for airplane development and design. Uses of modern engineering software tools, like MatLab and Excel, are presented, with examples of batch and optimization algorithms which combine the computing power of MatLab with robust aerodynamic tools like XFOIL and AVL. The resulting data are demonstrated in the development and use of a full non-linear six-degrees-of-freedom simulator. The applications for this numerical tool-box vary from un-manned aerial vehicles to first-order analysis of manned aircraft. A blended-wing-body airplane is used for the analysis to demonstrate the flexibility of the code, from classic wing-and-tail configurations to less common configurations like the blended wing body. This configuration has been shown to have superior aerodynamic performance -- in contrast to its classic wing-and-tube-fuselage counterparts -- and reduced sensitivity to aerodynamic flutter, as well as potential for increased engine noise abatement. Of course, without a classic tail elevator to damp the nose-up pitching moment and a vertical tail rudder to damp yaw and possible rolling dynamics, the challenges in lateral roll and yaw stability, as well as in pitching moment, are not insignificant. This thesis work applies the tools necessary to perform airplane development and optimization on a rapid basis, demonstrating the strength of this tool-box through examples and comparison of the results to similar airplane performance characteristics published in the literature.

  13. Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy

    2001-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.

  14. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
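
    A compact sketch of the two-step idea, under stated assumptions: a coarse grid first narrows the (log C, log gamma) box, then a small PSO refines within it, scoring parameters by cross-validated error on synthetic data. This is not the paper's exact GTA, and it omits the epsilon parameter; all ranges and PSO constants are illustrative.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    X, y = make_regression(n_samples=150, n_features=5, noise=5.0, random_state=2)

    def neg_mse(log_params):
        C, gamma = 10.0 ** log_params
        return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    # Step 1: a coarse grid traverse narrows the (log C, log gamma) search box.
    grid = [(c, g) for c in np.linspace(-1, 3, 5) for g in np.linspace(-4, 0, 5)]
    center = np.array(max(grid, key=lambda p: neg_mse(np.array(p))))

    # Step 2: a small PSO refines parameters inside the local box around `center`.
    n_particles, iters, w, c1, c2 = 10, 15, 0.6, 1.5, 1.5
    pos = center + rng.uniform(-0.5, 0.5, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([neg_mse(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([neg_mse(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()

    print("best log10(C), log10(gamma):", gbest)
    ```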

  15. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2017-04-01

    Economic dispatch (ED) ensures that generation is allocated to the power units such that the total fuel cost is minimized and all operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present-day restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for such complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at sub-optimal results. The performance of the PSO variants is validated against the traditional solver GAMS for single-area as well as multi-area economic dispatch on three test cases of a large 140-unit standard test system with complex constraints.

  16. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of the wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process, in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is used to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller; the comparison verifies the improved performance of the controller with weights computed by the PSO-based technique.

  17. Noise in Charge Amplifiers— A gm/ID Approach

    NASA Astrophysics Data System (ADS)

    Alvarez, Enrique; Avila, Diego; Campillo, Hernan; Dragone, Angelo; Abusleme, Angel

    2012-10-01

    Charge amplifiers are the standard solution for amplifying signals from capacitive detectors in high-energy physics experiments. In a typical front-end, the noise due to the charge amplifier, and particularly its input transistor, limits the achievable resolution. The classic approach to attenuating noise in MOSFET charge amplifiers is to use the maximum power available, a minimum-length input device, and an input transistor width chosen to achieve optimal capacitive matching at the input node. These conclusions, reached through analyses based on simple noise models, lead to sub-optimal results. In this work, a new approach to noise analysis for charge amplifiers, based on an extension of the gm/ID methodology, is presented. This method combines circuit equations with results from SPICE simulations, both valid for all operation regions and including all noise sources. The method, which makes it possible to find the optimal operating point of the charge amplifier input device for maximum resolution, shows that the minimum device length is not necessarily optimal and that flicker noise is responsible for the non-monotonic noise-versus-current behavior, and it provides deeper insight into the noise-limiting mechanisms from an alternative, more design-oriented point of view.

  18. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution--especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.

  19. Analogue of the Kelley condition for optimal systems with retarded control

    NASA Astrophysics Data System (ADS)

    Mardanov, Misir J.; Melikov, Telman K.

    2017-07-01

    In this paper, we consider an optimal control problem with retarded control and study a larger class of singular (in the classical sense) controls. The Kelley and equality type optimality conditions are obtained. To prove our main results, we use the Legendre polynomials as variations of control.

  20. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  1. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.

  2. Reconfigurable Flight Control Designs With Application to the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zhenglu

    1999-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.

  3. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a new and highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community today follows two main directions to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, these two data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, information geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data in specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is illustrated by an application to wind speed forecasts on the island of Kefalonia, Greece.

  4. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    Processing the information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves on classic numerical algorithms in cases where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence, in the form of artificial neural networks whose design includes the topology of connections between neurons, have become an important instrument for processing and modelling such data. This concept results from the integration of neural networks with parameter optimization methods, and makes it possible to avoid arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the Group Method of Data Handling (GMDH) algorithm, which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
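
    A minimal sketch of a single GMDH layer may help fix ideas: every pair of inputs feeds a small quadratic polynomial neuron fitted by least squares on a training set, and only the neurons that generalize best on a separate selection set survive, so the network structure is grown rather than fixed in advance. The data below are synthetic placeholders.

    ```python
    # One GMDH layer: pairwise quadratic neurons fitted by least squares,
    # ranked on a held-out selection set. Synthetic data stand in for real
    # geodetic measurements.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                      # 4 candidate inputs
    y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.05, size=200)
    Xtr, ytr, Xsel, ysel = X[:100], y[:100], X[100:], y[100:]

    def design(a, b):
        """Ivakhnenko polynomial terms for an input pair (a, b)."""
        return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

    candidates = []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(design(Xtr[:, i], Xtr[:, j]), ytr, rcond=None)
        err = np.mean((design(Xsel[:, i], Xsel[:, j]) @ coef - ysel) ** 2)
        candidates.append((err, (i, j)))

    for err, pair in sorted(candidates)[:3]:           # keep the 3 best neurons
        print('inputs %s  selection MSE %.4f' % (pair, err))
    ```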

  5. New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Wach, Łukasz

    2017-10-01

    In this paper, new methods for the energy-based minimization of perfect control inputs are presented. For that purpose, multivariable integer- and fractional-order models are applied, which can describe a variety of real-world processes. Up to now, classical approaches have been used in the form of minimum-norm/least-squares inverses. However, these tools do not guarantee the control corresponding to optimal input energy. Therefore, a new class of inverse-based methods is introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. The proposed solution remarkably outperforms the typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the potential of the new energy-based approaches.
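
    The core point can be reproduced in a few lines: for a nonsquare system y = Bu, the familiar minimum-norm pseudoinverse minimizes u'u, but when the inputs represent different physical quantities the relevant energy is a weighted form u'Wu, and a W-weighted right inverse achieves it at lower cost. This is a hedged sketch of the energy argument with made-up numbers, not the paper's σ- or H-inverse construction.

    ```python
    # Minimum-norm vs. weighted right inverse of a nonsquare (fat) matrix:
    # both reproduce y exactly, but the weighted inverse attains lower
    # weighted input energy u'Wu. Numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.normal(size=(2, 4))            # 2 outputs, 4 inputs (nonsquare)
    y = np.array([1.0, -2.0])
    W = np.diag([1.0, 10.0, 1.0, 10.0])    # assumed per-input energy prices

    u_mn = np.linalg.pinv(B) @ y                            # minimum-norm inverse
    Winv = np.linalg.inv(W)
    u_w = Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, y)   # W-weighted inverse

    for name, u in [('min-norm', u_mn), ('weighted', u_w)]:
        print("%s: residual %.1e  energy u'Wu = %.3f"
              % (name, np.linalg.norm(B @ u - y), u @ W @ u))
    ```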

  6. Jerk Minimization Method for Vibration Control in Buildings

    NASA Technical Reports Server (NTRS)

    Abatan, Ayo O.; Yao, Leummim

    1997-01-01

    In many vibration-minimization control problems for high-rise buildings subject to strong earthquake loads, the emphasis has been on a combination of minimizing the displacement, the velocity, and the acceleration of the motion of the building. In most cases, the accelerations involved are not necessarily large, but the changes in them (the jerk) are abrupt. These changes in magnitude or direction are responsible for much building damage and also create discomfort, such as motion sickness, for inhabitants of these structures because of the element of surprise. We propose a method that also minimizes the jerk, the sudden change in acceleration (i.e., the derivative of the acceleration), using classical linear quadratic optimal control. This is done through the introduction of a quadratic performance index involving the cost due to the jerk, a special change of variable, and the use of the jerk as a control variable. The values of the optimal control are obtained using the Riccati equation.
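
    The change of variable can be sketched directly: augment the state with the acceleration so that the jerk becomes the control input of a chain of integrators, then solve the standard LQR Riccati equation. The one-degree-of-freedom toy model and weights below are assumptions for illustration; a real building model would add structural dynamics.

    ```python
    # Jerk-as-control LQR sketch: state (displacement, velocity, acceleration),
    # control = jerk, standard continuous-time Riccati solution.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Chain of integrators: x' = v, v' = a, a' = j (jerk = control input)
    A = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])
    B = np.array([[0.], [0.], [1.]])

    Q = np.diag([10.0, 1.0, 1.0])   # penalize displacement, velocity, acceleration
    R = np.array([[0.1]])           # penalize the jerk (control) itself

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)      # optimal feedback: j = -K [x, v, a]'
    print('jerk feedback gains:', K.ravel())
    ```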

  7. Boosting quantum annealer performance via sample persistence

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili

    2017-07-01

    We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the values of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, due to their being smaller and consisting of disconnected components. This approach significantly increases the success rate and the number of observations of the best known energy value in samples obtained from the quantum annealer, when compared with calling the quantum annealer directly, even when using fewer annealing cycles. Use of the method results in a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.
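
    The variable-fixing step can be sketched as follows: collect samples, keep an elite low-energy subset, fix every variable that takes the same value throughout the elite, and substitute those values into the QUBO so that only a smaller residual problem remains. The sampler below is uniform random for self-containment; samples from an actual annealer concentrate near optima, so far more variables persist.

    ```python
    # Sample-persistence sketch: fix variables identical across elite samples
    # and absorb them into the residual QUBO's linear (diagonal) terms.
    import numpy as np

    def qubo_energy(Q, x):
        return x @ Q @ x

    def fix_persistent(Q, samples, elite_frac=0.02):
        E = np.array([qubo_energy(Q, s) for s in samples])
        elite = samples[np.argsort(E)[:max(1, int(elite_frac * len(samples)))]]
        persistent = elite.min(axis=0) == elite.max(axis=0)  # same value everywhere
        fixed_vals = elite[0] * persistent
        free = ~persistent
        # Substitute fixed variables: free-variable linear terms absorb the
        # couplings to fixed ones (x_i^2 = x_i for binaries, so use the diagonal).
        Qr = Q[np.ix_(free, free)].copy()
        lin = (Q[np.ix_(free, persistent)]
               + Q[np.ix_(persistent, free)].T) @ fixed_vals[persistent]
        Qr[np.diag_indices_from(Qr)] += lin
        return Qr, persistent

    rng = np.random.default_rng(2)
    n = 30
    Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2
    samples = rng.integers(0, 2, size=(200, n))   # stand-in for annealer reads
    Qr, persistent = fix_persistent(Q, samples)
    print('fixed %d of %d variables' % (persistent.sum(), n))
    ```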

  8. Many-body kinetics of dynamic nuclear polarization by the cross effect

    NASA Astrophysics Data System (ADS)

    Karabanov, A.; Wiśniewski, D.; Raimondi, F.; Lesanovsky, I.; Köckenberger, W.

    2018-03-01

    Dynamic nuclear polarization (DNP) is an out-of-equilibrium method for generating nonthermal spin polarization which provides large signal enhancements in modern diagnostic methods based on nuclear magnetic resonance. A particular instance is cross-effect DNP, which involves the interaction of two coupled electrons with the nuclear spin ensemble. Here we develop a theory for this important DNP mechanism and show that the nonequilibrium nuclear polarization buildup is effectively driven by three-body incoherent Markovian dissipative processes involving simultaneous state changes of two electrons and one nucleus. We identify different parameter regimes for effective polarization transfer and discuss under which conditions the polarization dynamics can be simulated by classical kinetic Monte Carlo methods. Our theoretical approach allows simulations of the polarization dynamics on an individual spin level for ensembles consisting of hundreds of nuclear spins. The insight obtained by these simulations can be used to find optimal experimental conditions for cross-effect DNP and to design tailored radical systems that provide optimal DNP efficiency.

  9. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    The artificial immune system is one of the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is first proposed. The recombination operator generates promising candidate solutions that enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. The comparisons with the classic CSA, the CSA with recombination operator (RCSA), and the CSA with recombination and modified hypermutation operator (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
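
    A minimal clonal selection sketch with the paper's two ingredients, recombination that fuses information from two parents and a hypermutation whose strength depends on fitness rank, is given below. The benchmark (the sphere function) and all parameters are illustrative assumptions, not the paper's 16-function suite.

    ```python
    # Toy clonal selection with recombination and rank-dependent hypermutation,
    # minimizing the sphere function. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    dim, pop_size, n_gen = 10, 20, 200
    f = lambda x: np.sum(x**2, axis=-1)          # sphere benchmark

    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    for gen in range(n_gen):
        order = np.argsort(f(pop))
        pop = pop[order]                          # rank by fitness (best first)
        # Recombination: blend each antibody with a random better-ranked parent.
        parents = rng.integers(0, pop_size // 2, size=pop_size)
        beta = rng.random((pop_size, 1))
        recombined = beta * pop + (1 - beta) * pop[parents]
        # Hypermutation: mutation scale grows with (normalized) fitness rank.
        scale = 0.5 * (np.arange(pop_size) / pop_size + 0.05)[:, None]
        clones = recombined + rng.normal(scale=scale, size=(pop_size, dim))
        # Selection: keep the better of original vs. mutated clone.
        better = f(clones) < f(pop)
        pop[better] = clones[better]
    print('best value after %d generations: %.3e' % (n_gen, f(pop).min()))
    ```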

  10. Principal Curves on Riemannian Manifolds.

    PubMed

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods work well only when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic principal curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  11. Second-order variational equations for N-body simulations

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2016-07-01

    First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
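
    Following the interface documented for the REBOUND package (exact argument names may differ between versions), first- and second-order variational equations attached to a simulation look roughly like this; the two derivatives are the ingredients of a Newton step in, say, a transit-timing fit.

    ```python
    # Sketch of REBOUND's documented variational-equation interface: integrate
    # d(state)/da and d^2(state)/da^2 alongside the orbit, where a is the
    # planet's initial semi-major axis. Version-dependent details may differ.
    import rebound

    sim = rebound.Simulation()
    sim.add(m=1.0)                       # star
    sim.add(m=1e-3, a=1.0, e=0.1)        # planet
    var1 = sim.add_variation()                           # first-order variations
    var2 = sim.add_variation(order=2, first_order=var1)  # second-order variations
    var1.vary(1, "a")                    # differentiate w.r.t. particle 1's a
    var2.vary(1, "a")
    sim.integrate(100.0)

    # First and second derivative of the planet's x coordinate with respect to
    # its initial semi-major axis:
    print(var1.particles[1].x, var2.particles[1].x)
    ```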

  12. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by dynamic programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
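
    The two-phase scheme can be sketched with a toy chain: an offline backward dynamic-programming pass tabulates, for each process and each discretized incoming state, the control minimizing the cost-to-go toward the final quality requirement; at runtime the table is merely looked up for the detected state. The transformation functions below are invented stand-ins.

    ```python
    # Backward dynamic programming over a toy three-process chain, followed by
    # a runtime table lookup. Dynamics, grids, and costs are assumptions.
    import numpy as np

    states = np.linspace(0.0, 1.0, 21)        # discretized product state
    controls = np.linspace(-0.3, 0.3, 13)     # admissible control values
    transform = [lambda s, u: s + u,          # process 1 (assumed dynamics)
                 lambda s, u: 0.8 * s + u,    # process 2
                 lambda s, u: s + 0.5 * u]    # process 3
    target = 0.7                              # final quality requirement

    def nearest(s):
        return np.abs(states - s).argmin()

    n = len(transform)
    J = np.zeros((n + 1, len(states)))
    J[n] = (states - target) ** 2             # terminal cost
    policy = np.zeros((n, len(states)), dtype=int)

    for k in range(n - 1, -1, -1):            # offline phase: backward pass
        for i, s in enumerate(states):
            costs = [u**2 + J[k + 1, nearest(transform[k](s, u))]
                     for u in controls]
            policy[k, i] = int(np.argmin(costs))
            J[k, i] = min(costs)

    s = 0.1                                   # runtime phase: detected state
    for k in range(n):
        u = controls[policy[k, nearest(s)]]
        s = transform[k](s, u)
        print('process %d: u = %+.3f -> state %.3f' % (k + 1, u, s))
    ```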

  13. Potential testing of reprocessing procedures by real-time polymerase chain reaction: A multicenter study of colonoscopy devices.

    PubMed

    Valeriani, Federica; Agodi, Antonella; Casini, Beatrice; Cristina, Maria Luisa; D'Errico, Marcello Mario; Gianfranceschi, Gianluca; Liguori, Giorgio; Liguori, Renato; Mucci, Nicolina; Mura, Ida; Pasquarella, Cesira; Piana, Andrea; Sotgiu, Giovanni; Privitera, Gaetano; Protano, Carmela; Quattrocchi, Annalisa; Ripabelli, Giancarlo; Rossini, Angelo; Spagnolo, Anna Maria; Tamburro, Manuela; Tardivo, Stefano; Veronesi, Licia; Vitali, Matteo; Romano Spica, Vincenzo

    2018-02-01

    Reprocessing of endoscopes is key to preventing cross-infection after colonoscopy. Culture-based methods are recommended for monitoring, but alternative and rapid approaches are needed to improve surveillance and reduce turnover times. A molecular strategy based on detection of residual traces from the gut microbiota was developed and tested in a multicenter survey. A simplified sampling and DNA extraction protocol using nylon-tipped flocked swabs was optimized. A multiplex real-time polymerase chain reaction (PCR) test was developed that targeted 6 bacterial genes amplified in 3 mixes. The method was validated by interlaboratory tests involving 5 reference laboratories. Colonoscopy devices (n = 111) were sampled in 10 Italian hospitals. Culture-based microbiology and metagenomic tests were performed to verify the PCR data. The sampling method was easily applied in all 10 endoscopy units, and the optimized DNA extraction and amplification protocol was successfully performed by all of the involved laboratories. This PCR-based method allowed identification of both contaminated (n = 59) and fully reprocessed endoscopes (n = 52) with high sensitivity (98%) and specificity (98%) within 3-4 hours, in contrast to the 24-72 hours needed for a classic microbiology test. Results were confirmed by next-generation sequencing and classic microbiology. A novel approach for monitoring the reprocessing of colonoscopy devices was developed and successfully applied in a multicenter survey. The general principle of tracing biological fluids through microflora DNA amplification was successfully applied and may represent a promising approach for hospital hygiene.

  14. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
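
    The mapping from ensemble construction to an Ising-type problem can be sketched as a QUBO: binary weights select weak classifiers so that their combined vote matches the labels, plus a sparsity penalty. For a handful of classifiers the QUBO can be minimized by brute force, which is exactly the cost function an annealer would sample; the data and weak classifiers below are synthetic placeholders, not the paper's Higgs observables.

    ```python
    # Weak-classifier selection as a QUBO: minimize a squared-error cost over
    # binary weights. Synthetic labels and classifiers for illustration only.
    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    n_events, n_weak = 500, 8
    y = rng.choice([-1, 1], size=n_events)                 # true labels
    # Weak classifiers: noisy copies of the label, right ~65% of the time.
    c = np.array([np.where(rng.random(n_events) < 0.65, y, -y)
                  for _ in range(n_weak)])

    lam = 0.05                                             # sparsity penalty
    Cij = (c @ c.T) / n_weak**2                            # quadratic couplings
    Ciy = (c @ y) / n_weak                                 # label correlations

    def cost(w):
        w = np.asarray(w)
        return w @ Cij @ w - 2 * w @ Ciy + lam * w.sum()

    best = min(itertools.product([0, 1], repeat=n_weak), key=cost)
    votes = np.sign(np.array(best) @ c)
    print('selected weights:', best,
          ' training accuracy: %.3f' % (votes == y).mean())
    ```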

  15. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  16. Generalizations of the Alternating Direction Method of Multipliers for Large-Scale and Distributed Optimization

    DTIC Science & Technology

    2014-05-01

    exact one is solved later (assigned as step 5 of Algorithm 2) because at each iteration, the ADMM updates the variables in the Gauss-Seidel ... Jacobi ADMM (see Algorithm 5 below). Unlike the Gauss-Seidel ADMM, the Jacobi ADMM updates all the blocks in parallel at every iteration: x_i^{k+1} ... that extending ADMM straightforwardly from the classic Gauss-Seidel setting to the Jacobi setting, from two blocks to multiple blocks, will preserve ...
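
    The distinction the report draws can be sketched on a toy separable quadratic program with a coupling constraint: Gauss-Seidel ADMM updates the blocks sequentially, each block seeing the newest values of its predecessors, while Jacobi ADMM updates all blocks in parallel from the previous iterate. Plain Jacobi updates may need damping or proximal terms to converge, which is precisely the report's subject; the instance below is small and benign.

    ```python
    # Multi-block ADMM, Gauss-Seidel vs. Jacobi ordering, on
    # min sum_i 0.5*||x_i - v_i||^2  s.t.  sum_i A_i x_i = b.
    # Small random instance for illustration only.
    import numpy as np

    rng = np.random.default_rng(5)
    nblk, dim, m = 4, 3, 2
    A = [rng.normal(size=(m, dim)) for _ in range(nblk)]
    v = [rng.normal(size=dim) for _ in range(nblk)]
    b = rng.normal(size=m)
    rho = 0.5

    def x_update(i, src, lam):
        """Closed-form minimizer of the augmented Lagrangian over block x_i."""
        r = sum(A[j] @ src[j] for j in range(nblk) if j != i) - b
        M = np.eye(dim) + rho * A[i].T @ A[i]
        return np.linalg.solve(M, v[i] - A[i].T @ (lam + rho * r))

    def admm(jacobi, iters=500):
        xs = [np.zeros(dim) for _ in range(nblk)]
        lam = np.zeros(m)
        for _ in range(iters):
            snapshot = [x.copy() for x in xs]
            for i in range(nblk):
                # Jacobi blocks see only the previous iterate; Gauss-Seidel
                # blocks see the newest values updated earlier in this sweep.
                xs[i] = x_update(i, snapshot if jacobi else xs, lam)
            lam = lam + rho * (sum(A[i] @ xs[i] for i in range(nblk)) - b)
        return np.linalg.norm(sum(A[i] @ xs[i] for i in range(nblk)) - b)

    print('Gauss-Seidel constraint residual: %.2e' % admm(jacobi=False))
    print('Jacobi       constraint residual: %.2e' % admm(jacobi=True))
    ```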

  17. The Optimal Convergence Rate of the p-Version of the Finite Element Method.

    DTIC Science & Technology

    1985-10-01

    commercial and research codes. The p-version and h-p versions are new developments. There is only one commercial code, the system PROBE (Noetic Tech, St. Louis). The theoretical aspects have been studied only recently. The first theoretical paper appeared in 1981 (see [7]). See also [1], [7], [8], [9] ... model problem (2.2)-(2.3) is a classical case of the elliptic equation problem on a nonsmooth domain. The structure of this problem is well studied

  18. Second derivative in the model of classical binary system

    NASA Astrophysics Data System (ADS)

    Abubekerov, M. K.; Gostev, N. Yu.

    2016-06-01

    We have obtained analytical expressions for the second derivatives of the light curve with respect to the geometric parameters in the model of eclipsing classical binary systems. These expressions constitute an efficient algorithm for calculating the numerical values of these second derivatives for all physical values of the geometric parameters. Knowledge of the values of the second derivatives of the light curve at some point provides additional information about the asymptotic behaviour of the function near this point, and can significantly improve the search for the best-fitting light curve through the use of second-order optimization methods. We write the expressions for the second derivatives in a form which is most compact and uniform for all values of the geometric parameters, making it easy to write a computer program to calculate the values of these derivatives.

  19. Contributions on Optimizing Approximations in the Study of Melting and Solidification Processes That Occur in Processing by Electro-Erosion

    NASA Astrophysics Data System (ADS)

    Potra, F. L.; Potra, T.; Soporan, V. F.

    We propose two optimization methods for the processes which appear in EDM (Electrical Discharge Machining). The first concerns the introduction of a new function approximating the thermal flux energy in the EDM machine. Classical research approximates this energy with a Gaussian function; in the case of this unconventional technology, however, the Gaussian bell vanishes only as r → +∞, where r is the radius of the crater produced by EDM. We introduce a cubic spline regression which descends to zero at the crater's boundary. In the second optimization we propose modifications to the working technology regarding the displacement of the tool electrode toward the workpiece electrode, such that the melting of the material is realized in optimal time, and regarding the feed rate of the dielectric liquid governing the solidification of the expelled material. We realize this using the FAHP algorithm, based on the theory of eigenvalues and eigenvectors, which leads to mean values of best approximation.

  20. Constrained variational calculus for higher order classical field theories

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; de León, Manuel; Martín de Diego, David

    2010-11-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular the geometrical description of optimal control theory for partial differential equations.

  1. Optimal sensors placement and spillover suppression

    NASA Astrophysics Data System (ADS)

    Hanis, Tomas; Hromcik, Martin

    2012-04-01

    A new approach to optimal sensor placement (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek a trade-off between the presence of desirable modes in the captured measurements and the elimination of the influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on Fisher information matrix analysis. We consider two requirements in the optimal sensor placement procedure: on top of the classical EFI approach, the sensor configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independence (EFI) method, and modify the underlying criterion to meet both requirements, maximizing useful signals and minimizing spillover of unwanted modes at the same time. The performance of our approach is demonstrated by means of examples and a flexible Blended Wing Body (BWB) aircraft case study related to the running European-level FP7 research project 'ACFA 2020 - Active Control for Flexible Aircraft'.
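
    The modified criterion can be sketched with the standard effective-independence iteration: each candidate sensor is scored by its diagonal share of the Fisher information projector for the target modes, minus a weighted share for the unwanted modes, and the lowest-scoring candidate is deleted until the sensor budget is met. The mode shapes and penalty weight below are placeholders for a real modal model.

    ```python
    # EFI-style sensor placement with an assumed spillover penalty term.
    # Random mode shapes stand in for a real structural model.
    import numpy as np

    rng = np.random.default_rng(6)
    n_dof, n_target, n_unwanted, n_sensors = 40, 4, 3, 8
    Phi = rng.normal(size=(n_dof, n_target))     # desired mode shapes
    Psi = rng.normal(size=(n_dof, n_unwanted))   # modes whose pickup should be small
    beta = 0.5                                   # spillover penalty weight (assumed)

    def efi_values(M):
        """Diagonal of M (M'M)^-1 M': per-sensor Fisher-information share."""
        return np.einsum('ij,ji->i', M, np.linalg.solve(M.T @ M, M.T))

    keep = np.arange(n_dof)
    while len(keep) > n_sensors:
        score = efi_values(Phi[keep]) - beta * efi_values(Psi[keep])
        keep = np.delete(keep, np.argmin(score))  # drop least useful candidate
    print('selected sensor DOFs:', sorted(keep))
    ```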

  2. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  3. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  4. Necessary and sufficient optimality conditions for classical simulations of quantum communication processes

    NASA Astrophysics Data System (ADS)

    Montina, Alberto; Wolf, Stefan

    2014-07-01

    We consider the process consisting of preparation, transmission through a quantum channel, and subsequent measurement of quantum states. The communication complexity of the channel is the minimal amount of classical communication required for classically simulating it. Recently, we reduced the computation of this quantity to a convex minimization problem with linear constraints. Every solution of the constraints provides an upper bound on the communication complexity. In this paper, we derive the dual maximization problem of the original one. The feasible points of the dual constraints, which are inequalities, give lower bounds on the communication complexity, as illustrated with an example. The optimal values of the two problems turn out to be equal (zero duality gap). By this property, we provide necessary and sufficient conditions for optimality in terms of a set of equalities and inequalities. We use these conditions and two reasonable but unproven hypotheses to derive the lower bound n ×2n -1 for a noiseless quantum channel with capacity equal to n qubits. This lower bound can have interesting consequences in the context of the recent debate on the reality of the quantum state.

  5. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
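
    For readers new to the scheme, the hybrid loop reduces to something very small in the one-qubit case: a classical optimizer tunes the ansatz parameter to minimize the measured energy. The numpy toy below replaces hardware measurement with exact expectation values; the Hamiltonian is an assumed two-term example.

    ```python
    # Toy variational quantum eigensolver: classical optimizer over the
    # parameter of a one-qubit ansatz Ry(theta)|0>. Expectations are exact
    # here; on hardware they would come from repeated measurements.
    import numpy as np
    from scipy.optimize import minimize_scalar

    X = np.array([[0, 1], [1, 0]], dtype=float)
    Z = np.array([[1, 0], [0, -1]], dtype=float)
    H = 0.5 * Z + 0.8 * X                       # assumed two-term Hamiltonian

    def energy(theta):
        psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
        return psi @ H @ psi

    res = minimize_scalar(energy, bounds=(0, 2 * np.pi), method='bounded')
    print('VQE estimate: %.6f   exact ground state: %.6f'
          % (res.fun, np.linalg.eigvalsh(H).min()))
    ```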

  6. Predictive optimized adaptive PSS in a single machine infinite bus.

    PubMed

    Milla, Freddy; Duarte-Mermoud, Manuel A

    2016-07-01

    Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators, reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using a predictive optimization algorithm, which adapts to changes in the inputs of the system. This approach is part of small-signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach performs better than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison.

  7. Factorization and the synthesis of optimal feedback kernels for differential-delay systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark M.; Scheid, Robert E.

    1987-01-01

    A combination of ideas from the theories of operator Riccati equations and Volterra factorizations leads to the derivation of a novel, relatively simple set of hyperbolic equations which characterize the optimal feedback kernel for the finite-time regulator problem for autonomous differential-delay systems. Analysis of these equations elucidates the underlying structure of the feedback kernel and leads to the development of fast and accurate numerical methods for its computation. Unlike traditional formulations based on the operator Riccati equation, the gain is characterized by means of classical solutions of the derived set of equations. This leads to the development of approximation schemes which are analogous to what has been accomplished for systems of ordinary differential equations with given initial conditions.

  8. A Single-Lap Joint Adhesive Bonding Optimization Method Using Gradient and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III; Finckenor, Jeffrey L.

    1999-01-01

    A natural process for any engineer, scientist, educator, etc., is to seek the most efficient method for accomplishing a given task. In the case of structural design, an area that has a significant impact on structural efficiency is joint design. Unless the structure is machined from a solid block of material, the individual components which compose the overall structure must be joined together. The method for joining a structure varies depending on the applied loads, material, assembly and disassembly requirements, service life, environment, etc. Using both metallic and fiber-reinforced-plastic materials limits the user to two methods, or a combination of these methods, for joining the components into one structure. The first is mechanical fastening and the second is adhesive bonding. Mechanical fastening is by far the most popular joining technique; however, in terms of structural efficiency, adhesive bonding provides a superior joint since the load is distributed uniformly across the joint. The purpose of this paper is to develop a method for optimizing single-lap adhesively bonded joints using both gradient and genetic algorithms, and to compare the solution process for each method. The goal of the single-lap joint optimization is to find the most efficient structure that meets the imposed requirements while still remaining as lightweight, economical, and reliable as possible. For the single-lap joint, an optimum joint is determined by minimizing the weight of the overall joint based on constraints from adhesive strengths as well as empirically derived rules. The analytical solution of the single-lap joint is determined using the classical Goland-Reissner technique for case 2 type adhesive joints. Joint weight minimization is achieved using a commercially available routine, Design Optimization Tool (DOT), for the gradient solution, while an author-developed method is used for the genetic algorithm solution. Results illustrate the critical design variables as a function of adhesive properties and the convergence of different joints based on the two optimization methods.

  9. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  10. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes, or ''defects'', within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic, and achieves universality without using defects for encoding information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical-qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  11. DNP enhanced NMR with flip-back recovery

    NASA Astrophysics Data System (ADS)

    Björgvinsdóttir, Snædís; Walder, Brennan J.; Pinon, Arthur C.; Yarava, Jayasubba Reddy; Emsley, Lyndon

    2018-03-01

    DNP methods can provide significant sensitivity enhancements in magic angle spinning solid-state NMR, but in systems with long polarization build-up times, long recycling periods are required to optimize sensitivity. We show how the sensitivity of such experiments can be improved by the classic flip-back method, which recovers bulk proton magnetization following continuous-wave proton heteronuclear decoupling. Experiments were performed on formulations with characteristic build-up times spanning two orders of magnitude: a bulk BDPA-radical-doped o-terphenyl glass and microcrystalline samples of theophylline, L-histidine monohydrochloride monohydrate, and salicylic acid impregnated by incipient wetness. For these systems, the addition of flip-back is simple, improves the sensitivity beyond that provided by modern heteronuclear decoupling methods such as SPINAL-64, and provides optimal sensitivity at shorter recycle delays. We show how to acquire DNP-enhanced 2D refocused CP-INADEQUATE spectra with flip-back recovery, and demonstrate that the flip-back recovery method is particularly useful in rapid recycling regimes. We also report Overhauser-effect DNP enhancements of over 70 at 592.6 GHz/900 MHz.

  12. Geometrical optimization of the transmission and dispersion properties of arrayed waveguide gratings using two stigmatic point mountings.

    PubMed

    Muñoz, P; Pastor, D; Capmany, J; Martínez, A

    2003-09-22

    In this paper, a procedure to optimize flat-top Arrayed Waveguide Grating (AWG) devices in terms of transmission and dispersion properties is presented. The systematic procedure consists of the stigmatization and minimization of the Light Path Function (LPF) used in classical planar spectrograph theory. The resulting geometry for the Arrayed Waveguides (AW) and the Output Waveguides (OW) is not the classical Rowland mounting but an arbitrary arrangement. Simulations using previously published enhanced modeling show how this geometry reduces the passband ripple, asymmetry, and dispersion in a design example.

  13. Asset Allocation and Optimal Contract for Delegated Portfolio Management

    NASA Astrophysics Data System (ADS)

    Liu, Jingjun; Liang, Jianfeng

    This article studies the portfolio selection and contracting problems between an individual investor and a professional portfolio manager in a discrete-time principal-agent framework. Portfolio selection and optimal contracts are obtained in closed form. The optimal contract is composed of a fixed fee, the cost, and a fraction of the excess expected return. The optimal portfolio is similar to that of the classical two-fund separation theorem.

  14. New Method to Prepare Mitomycin C Loaded PLA-Nanoparticles with High Drug Entrapment Efficiency

    NASA Astrophysics Data System (ADS)

    Hou, Zhenqing; Wei, Heng; Wang, Qian; Sun, Qian; Zhou, Chunxiao; Zhan, Chuanming; Tang, Xiaolong; Zhang, Qiqing

    2009-07-01

    The classically utilized double-emulsion solvent-diffusion technique for encapsulating the water-soluble drug Mitomycin C (MMC) in PLA nanoparticles suffers from low encapsulation efficiency because of the drug's rapid partitioning to the external aqueous phase. In this paper, MMC-loaded PLA nanoparticles were prepared by a new single-emulsion solvent-evaporation method, in which soybean phosphatidylcholine (SPC) was employed to improve the liposolubility of MMC through the formation of an MMC-SPC complex. Four main influential factors identified by a single-factor test, namely the PLA molecular weight, the ratio of PLA to SPC (wt/wt), the ratio of MMC to SPC (wt/wt), and the volume ratio of oil phase to water phase, were evaluated using an orthogonal design with respect to drug entrapment efficiency. The drug release study was performed in pH 7.2 PBS at 37 °C, with drug analysis using a UV/vis spectrometer at 365 nm. MMC-PLA particles prepared by the classical method were used for comparison. The MMC-SPC-PLA nanoparticles formulated under the optimized conditions are found to be relatively uniform in size (594 nm) with up to 94.8% drug entrapment efficiency, compared with 6.44 μm PLA-MMC microparticles with 34.5% drug entrapment efficiency. The release of MMC is biphasic, with an initial burst effect; the cumulative drug release over 30 days is 50.17% for the PLA-MMC-SPC nanoparticles and 74.1% for the PLA-MMC particles. The IR analysis of the MMC-SPC complex shows that its high liposolubility may be attributed to a weak physical interaction between MMC and SPC during the formation of the complex. It is concluded that the new method is advantageous in terms of smaller size, narrower size distribution, higher encapsulation yield, and longer sustained drug release in comparison with the classical method.

  15. Implementation of quantum game theory simulations using Python

    NASA Astrophysics Data System (ADS)

    Madrid S., A.

    2013-05-01

    This paper provides some examples of quantum games simulated in the Python programming language. The quantum games have been developed with the Sympy Python library, which permits solving quantum problems in symbolic form. The application of these methods of quantum mechanics to game theory gives us more possibilities to achieve results not obtainable before. To illustrate these methods, the quantum battle of the sexes, the prisoner's dilemma, and card games have been simulated. These solutions are able to overcome the classical bottleneck and obtain optimal quantum strategies. In this way, Python demonstrates that it is possible to implement more advanced and complicated quantum game algorithms.

  16. Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile

    NASA Astrophysics Data System (ADS)

    Halverson, Thomas; Poirier, Bill

    2015-03-01

    'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed using two different methods: (1) a phase-space-truncated momentum-symmetrized Gaussian basis and (2) a correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented in a single easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm^-1.
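
    The truncation idea itself is simple enough to sketch: rather than a full direct-product harmonic oscillator basis, keep only product states whose zeroth-order energy falls below a cutoff, as a classical phase-space (energy-shell) picture suggests. The frequencies and cutoff below are illustrative, not the CH3CN values.

    ```python
    # Energy-cutoff selection of a product harmonic oscillator basis, versus
    # the full direct product. Frequencies and cutoff are assumed.
    import itertools
    import numpy as np

    omega = np.array([1.0, 1.3, 2.1, 2.9])     # mode frequencies (assumed units)
    n_max, e_cut = 12, 14.0

    full, kept = 0, 0
    for ns in itertools.product(range(n_max + 1), repeat=len(omega)):
        full += 1
        e0 = np.dot(omega, np.array(ns) + 0.5)  # harmonic zero-order energy
        if e0 <= e_cut:
            kept += 1
    print('kept %d of %d product functions (%.1f%%)'
          % (kept, full, 100.0 * kept / full))
    ```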

  17. Optimization of Blended Wing Body Composite Panels Using Both NASTRAN and Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.

    2006-01-01

    The blended wing body (BWB) is a concept that has been investigated for improving the performance of transport aircraft. A trade study was conducted by evaluating four regions from a BWB design characterized by three fuselage bays and a 400,000 lb. gross take-off weight (GTW). This report describes the structural optimization of these regions via computational analysis and compares them to the baseline designs of the same construction. The identified regions were simplified for use in the optimization. The regions were represented by flat panels having appropriate classical boundary conditions and uniform force resultants along the panel edges. Panel-edge tractions and internal pressure values applied during the study were those determined by nonlinear NASTRAN analyses. Only one load case was considered in the optimization analysis for each panel region. Optimization was accomplished using both NASTRAN solution 200 and Genetic Algorithm (GA), with constraints imposed on stress, buckling, and minimum thicknesses. The NASTRAN optimization analyses often resulted in infeasible solutions due to violation of the constraints, whereas the GA enforced satisfaction of the constraints and, therefore, always ensured a feasible solution. However, both optimization methods encountered difficulties when the number of design variables was increased. In general, the optimized panels weighed less than the comparable baseline panels.

  18. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose-rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with clinically relevant studies of new dosimetric rules.

  19. Hidden Statistics Approach to Quantum Simulations

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent and structure data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for the simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by a blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale-integration-based) computer, or numerically on a digital computer. This work opens a way to develop fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA's capabilities in conducting space activities. It is illustrated that the complexity of simulating particle interactions can be reduced from exponential to polynomial.

  20. Experimental quantum annealing: case study involving the graph isomorphism problem.

    PubMed

    Zick, Kenneth M; Shehab, Omar; French, Matthew

    2015-06-08

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N^2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.

  1. Experimental quantum annealing: case study involving the graph isomorphism problem

    PubMed Central

    Zick, Kenneth M.; Shehab, Omar; French, Matthew

    2015-01-01

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N^2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers. PMID:26053973

  2. Improving Zernike moments comparison for optimal similarity and rotation angle retrieval.

    PubMed

    Revaud, Jérôme; Lavoué, Guillaume; Baskurt, Atilla

    2009-04-01

    Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However, the classical way of comparing two Zernike descriptors only takes into account the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is of particular interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular, the robustness of retrieval against noise and geometric deformation is greatly improved. Moreover, the rotation angle estimation is also more accurate than state-of-the-art algorithms.
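
    The phase-aware comparison can be sketched compactly: a rotation by angle t multiplies a moment of angular repetition m by exp(-imt), so the magnitudes alone are rotation-invariant (the classical comparator) while the phases single out the alignment angle. The moments below are synthetic, and a dense 1-D search stands in for whatever estimator one prefers.

    ```python
    # Phase-aware Zernike comparison: recover the rotation angle from the
    # phase-corrected correlation of two complex moment vectors.
    import numpy as np

    rng = np.random.default_rng(7)
    m = np.array([0, 1, 2, 3, 4])                      # angular repetitions
    Z1 = rng.normal(size=5) + 1j * rng.normal(size=5)  # moments of pattern 1
    true_rot = 0.7
    Z2 = Z1 * np.exp(-1j * m * true_rot)               # pattern 2: rotated pattern 1

    # Best rotation of pattern 1 onto pattern 2 maximizes the real part of the
    # phase-corrected correlation; a dense 1-D search suffices here.
    t = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
    corr = Z1 * np.conj(Z2)
    h = (corr[None, :] * np.exp(-1j * np.outer(t, m))).sum(axis=1).real
    theta = t[np.argmax(h)]

    aligned = np.linalg.norm(Z1 * np.exp(-1j * m * theta) - Z2)
    mag_only = np.linalg.norm(np.abs(Z1) - np.abs(Z2))  # classical: blind to rotation
    print('recovered angle %.4f (true %.4f)' % (theta, true_rot))
    print('aligned distance %.2e, magnitude-only distance %.2e' % (aligned, mag_only))
    ```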

  3. TINKTEP: A fully self-consistent, mutually polarizable QM/MM approach based on the AMOEBA force field

    NASA Astrophysics Data System (ADS)

    Dziedzic, Jacek; Mao, Yuezhi; Shao, Yihan; Ponder, Jay; Head-Gordon, Teresa; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-09-01

    We present a novel quantum mechanical/molecular mechanics (QM/MM) approach in which a quantum subsystem is coupled to a classical subsystem described by the AMOEBA polarizable force field. Our approach permits mutual polarization between the QM and MM subsystems, effected through multipolar electrostatics. Self-consistency is achieved for both the QM and MM subsystems through a total energy minimization scheme. We provide an expression for the Hamiltonian of the coupled QM/MM system, which we minimize using gradient methods. The QM subsystem is described by the onetep linear-scaling DFT approach, which makes use of strictly localized orbitals expressed in a set of periodic sinc basis functions equivalent to plane waves. The MM subsystem is described by the multipolar, polarizable force field AMOEBA, as implemented in tinker. Distributed multipole analysis is used to obtain, on the fly, a classical representation of the QM subsystem in terms of atom-centered multipoles. This auxiliary representation is used for all polarization interactions between QM and MM, allowing us to treat them on the same footing as in AMOEBA. We validate our method in tests of solute-solvent interaction energies, for neutral and charged molecules, demonstrating the simultaneous optimization of the quantum and classical degrees of freedom. Encouragingly, we find that the inclusion of explicit polarization in the MM part of QM/MM improves the agreement with fully QM calculations.

  4. A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.

    PubMed

    Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi

    2015-12-01

    Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by key-value pairs. Further, in view of the dynamic characteristics of the changing data, we give the concept of influence degree to measure the coincidence of the current BN with new data, and then propose the corresponding two-pass MapReduce-based algorithms for incremental learning of BNs. Experimental results show the efficiency, scalability, and effectiveness of our methods.
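
    A single-machine caricature of the score-and-search core may help fix ideas; the sketch below does greedy hill climbing over edge additions and deletions against an abstract score callback, which in the paper would be the MDL metric evaluated by two-pass MapReduce jobs (the acyclicity check and the MapReduce layer are deliberately omitted).

```python
import itertools

def hill_climb(n_nodes, score, max_iters=100):
    """Greedy score-based structure search (single-machine sketch).

    score(edges) -> float evaluates a candidate graph given as a frozenset
    of (parent, child) pairs; in the paper this role is played by an MDL
    metric computed with MapReduce. Each step applies the single edge
    addition or deletion that most improves the score, until no move helps.
    NOTE: a real implementation must also reject cyclic candidates.
    """
    edges = frozenset()
    best = score(edges)
    for _ in range(max_iters):
        moves = []
        for u, v in itertools.permutations(range(n_nodes), 2):
            e = (u, v)
            cand = edges - {e} if e in edges else edges | {e}
            moves.append((score(frozenset(cand)), frozenset(cand)))
        top_score, top_edges = max(moves, key=lambda t: t[0])
        if top_score <= best:
            break                      # local optimum reached
        best, edges = top_score, top_edges
    return edges, best
```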

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodríguez-Cantano, Rocío; Pérez de Tudela, Ricardo; Bartolomei, Massimiliano

    Coronene-doped helium clusters have been studied by means of classical and quantum mechanical (QM) methods using a recently developed He–C₂₄H₁₂ global potential based on the use of optimized atom-bond improved Lennard-Jones functions. Equilibrium energies and geometries at global and local minima for systems with up to 69 He atoms were calculated by means of an evolutive algorithm and a basin-hopping approach and compared with results from path integral Monte Carlo (PIMC) calculations at 2 K. A detailed analysis performed for the smallest sizes shows that the precise localization of the He atoms forming the first solvation layer over the molecular substrate is affected by differences between relative potential minima. The comparison of the PIMC results with the predictions from the classical approaches and with diffusion Monte Carlo results allows us to examine the importance of both the QM and thermal effects.

  6. Quantum simulator review

    NASA Astrophysics Data System (ADS)

    Bednar, Earl; Drager, Steven L.

    2007-04-01

    Quantum information processing's objective is to utilize revolutionary computing capability based on harnessing the paradigm shift offered by quantum computing to solve classically hard and computationally challenging problems. Some of our computationally challenging problems of interest include: the capability for rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that the implementation of quantum computers is difficult to realize due to poor scalability and a high presence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.

  7. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    NASA Astrophysics Data System (ADS)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  8. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.
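
    For readers unfamiliar with Schwarz methods, here is a toy 1D overlapping Schwarz iteration for a model convection-diffusion problem; it illustrates only the general idea, not the authors' two-level preconditioner (the overlap is combined by letting the later subdomain solve overwrite, and all parameter values are arbitrary).

```python
import numpy as np

def schwarz_1d(eps=0.05, b=1.0, n=101, overlap=10, iters=50):
    """Two-subdomain overlapping Schwarz iteration (illustrative sketch).

    Solves -eps*u'' + b*u' = 1 on (0,1), u(0)=u(1)=0, by central
    differences; each subdomain solve takes Dirichlet data from the
    previous global iterate, and the second solve overwrites the overlap.
    """
    h = 1.0 / (n - 1)
    lo = -eps / h**2 - b / (2 * h)   # sub-diagonal coefficient
    di = 2 * eps / h**2              # diagonal coefficient
    hi = -eps / h**2 + b / (2 * h)   # super-diagonal coefficient
    f = np.ones(n)
    u = np.zeros(n)
    mid = n // 2
    doms = [(1, mid + overlap), (mid - overlap, n - 1)]  # interior ranges
    for _ in range(iters):
        u_old = u.copy()
        for (a, c) in doms:
            m = c - a
            A = (np.diag(np.full(m, di))
                 + np.diag(np.full(m - 1, lo), -1)
                 + np.diag(np.full(m - 1, hi), 1))
            rhs = f[a:c].copy()
            rhs[0] -= lo * u_old[a - 1]   # boundary data from global iterate
            rhs[-1] -= hi * u_old[c]
            u[a:c] = np.linalg.solve(A, rhs)
    return u
```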

  9. Entanglement-Based Machine Learning on a Quantum Computer

    NASA Astrophysics Data System (ADS)

    Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.

    2015-03-01

    Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv:1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.

  10. Stochastic gradient ascent outperforms gamers in the Quantum Moves game

    NASA Astrophysics Data System (ADS)

    Sels, Dries

    2018-04-01

    In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016), 10.1038/nature17620] explore the possibility of using video games to help design quantum control protocols. The authors present a game called "Quantum Moves" (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that, "players succeed where purely numerical optimization fails." Moreover, by harnessing the player strategies, they can "outperform the most prominent established numerical methods." The aim of this Rapid Communication is to analyze the problem in detail and show that those claims are untenable. In fact, without any prior knowledge and starting from a random initial seed, a simple stochastic local optimization method finds near-optimal solutions which outperform all players. Counterdiabatic driving can even be used to generate protocols without resorting to numeric optimization. The analysis results in an accurate analytic estimate of the quantum speed limit which, apart from zero-point motion, is shown to be entirely classical in nature. The latter might explain why gamers are reasonably good at the game. A simple modification of the BringHomeWater challenge is proposed to test this hypothesis.
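
    The "simple stochastic local optimization method" invoked in the abstract can be caricatured as follows: starting from a random seed, keep Gaussian perturbations of a piecewise-constant protocol whenever they improve a black-box fidelity. The step size, iteration count, and fidelity interface below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def stochastic_ascent(fidelity, n_steps=64, sigma=0.1, iters=2000, seed=0):
    """Stochastic local optimization of a piecewise-constant control protocol.

    fidelity(u) -> float scores a control sequence u (e.g., the state
    transfer fidelity of moving the atom); it stands in for the quantum
    simulation. Random perturbations are kept whenever they improve the
    score, starting from a random initial seed.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n_steps)          # random initial protocol
    best = fidelity(u)
    for _ in range(iters):
        trial = u + sigma * rng.standard_normal(n_steps)
        val = fidelity(trial)
        if val > best:                        # greedy uphill acceptance
            u, best = trial, val
    return u, best
```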

  11. Springback effects during single point incremental forming: Optimization of the tool path

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

    Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional ones and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small-size smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures, or serial robots can be used to perform the forming operation. Whatever the considered machine, large deviations between the theoretical shape and the real shape can be observed after unclamping the part. These deviations are due to both the lack of stiffness of the machine and residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy of the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing the shape prediction of the formed part with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times which remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
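
    A common way to realize such an iterative tool-path correction is a mirror-type update in which the path is displaced against the predicted deviation after unclamping; the sketch below assumes this scheme and a user-supplied FE surrogate, both of which are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def correct_tool_path(target, simulate, gain=0.8, iters=5, tol=0.05):
    """Iterative springback compensation of an SPIF tool path (sketch).

    target   : (n, 3) array of desired final part coordinates.
    simulate : function mapping a candidate tool path to the predicted
               unclamped shape (the FE model plays this role in the paper).
    Each iteration displaces the path against the observed deviation.
    """
    path = target.copy()                      # first pass: nominal geometry
    for _ in range(iters):
        shape = simulate(path)
        deviation = shape - target            # springback error after unclamping
        if np.max(np.linalg.norm(deviation, axis=1)) < tol:
            break
        path = path - gain * deviation        # push opposite to the error
    return path
```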

  12. Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana

    2018-05-01

    The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.

  13. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration methods require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, so the recovery needs to be handled as a blind image restoration scenario. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because PSFs and noise energies differ, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes a LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, with the fitness function of FOA calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up the restoration rate and performs better. Both objective and subjective restoration performances are studied in the comparison experiments.
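
    For concreteness, here is a minimal sketch of the standard FOA loop applied to two scalar parameters (standing in for the LSSVR hyperparameters); the fitness callback abstracts the restoration-error evaluation, and the population size and search ranges are arbitrary.

```python
import numpy as np

def foa(fitness, pop=20, iters=100, seed=0):
    """Minimal fruit fly optimization for two positive parameters.

    fitness(gamma, sigma) -> float is the cost to minimize; in the paper
    it is the restoration error of the LSSVR model, here any black box.
    Flies scatter randomly around the swarm location; the 'smell' value
    S = 1/distance-to-origin of each fly is the candidate parameter, and
    the swarm moves to the best fly (standard FOA formulation).
    """
    rng = np.random.default_rng(seed)
    axis = rng.uniform(0, 1, size=(2, 2))     # swarm location, per parameter
    best_val, best_s = np.inf, None
    for _ in range(iters):
        xy = axis[None, :, :] + rng.uniform(-1, 1, size=(pop, 2, 2))
        dist = np.linalg.norm(xy, axis=2)     # distance to origin per fly/param
        S = 1.0 / dist                        # smell concentration judgment value
        vals = np.array([fitness(*s) for s in S])
        k = np.argmin(vals)
        if vals[k] < best_val:
            best_val, best_s = vals[k], S[k]
            axis = xy[k]                      # swarm flies to the best fly
    return best_s, best_val
```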

  14. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides similar or improved phase unwrapping compared with the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZpM methods.

  15. Quantum-classical correspondence in the vicinity of periodic orbits

    NASA Astrophysics Data System (ADS)

    Kumari, Meenu; Ghose, Shohini

    2018-05-01

    Quantum-classical correspondence in chaotic systems is a long-standing problem. We describe a method to quantify Bohr's correspondence principle and calculate the size of quantum numbers for which we can expect to observe quantum-classical correspondence near periodic orbits of Floquet systems. Our method shows how the stability of classical periodic orbits affects quantum dynamics. We demonstrate our method by analyzing quantum-classical correspondence in the quantum kicked top (QKT), which exhibits both regular and chaotic behavior. We use our correspondence conditions to identify signatures of classical bifurcations even in a deep quantum regime. Our method can be used to explain the breakdown of quantum-classical correspondence in chaotic systems.

  16. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands for the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors or from the classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2. PMID:24711733
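
    As a hedged illustration of the Bayesian route (not the authors' exact models, priors, or data), the sketch below runs a random-walk Metropolis sampler for a Weibull-type height-diameter curve on synthetic data, treating the parameters as random variables exactly as the abstract emphasizes; the curve form, priors, and tuning constants are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a Weibull-type height-diameter curve
# H = 1.3 + a*(1 - exp(-b * D**c)) + noise   (1.3 m = breast height)
a_t, b_t, c_t, noise = 25.0, 0.05, 1.2, 1.0
D = rng.uniform(5, 50, size=200)
H = 1.3 + a_t * (1 - np.exp(-b_t * D**c_t)) + noise * rng.standard_normal(200)

def log_post(theta):
    """Log-posterior with flat positivity priors and known error SD."""
    a, b, c = theta
    if a <= 0 or b <= 0 or c <= 0:
        return -np.inf
    mu = 1.3 + a * (1 - np.exp(-b * D**c))
    return -0.5 * np.sum((H - mu) ** 2) / noise**2

# Random-walk Metropolis: parameters are sampled as random variables;
# the classical (frequentist) route would instead maximize log_post.
theta = np.array([20.0, 0.1, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.5, 0.005, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]            # discard burn-in
print(chain.mean(axis=0))                 # posterior means of (a, b, c)
```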

  17. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands for the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors or from the classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2.

  18. Classical methods and modern analysis for studying fungal diversity

    Treesearch

    John Paul Schmit

    2005-01-01

    In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...

  19. Classical Methods and Modern Analysis for Studying Fungal Diversity

    Treesearch

    J. P. Schmit; D. J. Lodge

    2005-01-01

    In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...

  20. DNA elution from buccal cells stored on Whatman FTA Classic Cards using a modified methanol fixation method.

    PubMed

    Johanson, Helene C; Hyland, Valentine; Wicking, Carol; Sturm, Richard A

    2009-04-01

    We describe here a method for DNA elution from buccal cells and whole blood, both collected onto Whatman FTA technology, using methanol fixation followed by an elution PCR program. The extracted DNA is comparable in quality to DNA obtained with published Whatman FTA protocols, as judged by PCR-based genotyping. Elution of DNA from the dried sample is a known rate-limiting step in the published Whatman FTA protocol; this method enables the use of each 3-mm punch of sample for several PCR reactions instead of the standard one PCR reaction per sample punch. This optimized protocol therefore extends the usefulness and cost effectiveness of each buccal swab sample collected, when used for nucleic acid PCR and genotyping.

  1. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by a factor of up to two.
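
    For readers who have not seen it, the univariate scheme is a few lines; the multivariate generalization amounts to recursively applying the same factorization after choosing a variable order, which is exactly the degree of freedom the MCTS explores (the worked ordering in the comment is an illustrative example, not from the paper).

```python
def horner_univariate(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x**n with only n multiplications."""
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

# Multivariate case: factoring out the most-occurring variable first is the
# greedy textbook ordering; MCTS searches over such orderings instead.
# Worked example by hand:  x*y + x*z + x  ->  x*(y + z + 1)  (3 mults -> 1)
print(horner_univariate([1, 2, 3], 2.0))   # 1 + 2*2 + 3*4 = 17.0
```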

  2. A fictitious domain finite element method for simulations of fluid-structure interactions: The Navier-Stokes equations coupled with a moving solid

    NASA Astrophysics Data System (ADS)

    Court, Sébastien; Fournié, Michel

    2015-05-01

    The paper extends a stabilized fictitious domain finite element method initially developed for the Stokes problem to the incompressible Navier-Stokes equations coupled with a moving solid. This method has the advantage of predicting an optimal approximation of the normal stress tensor at the interface. The dynamics of the solid is governed by Newton's laws, and the interface between the fluid and the structure is materialized by a level set which cuts the elements of the mesh. An algorithm is proposed in order to treat the time evolution of the geometry, and numerical results are presented on a classical benchmark of the motion of a disk falling in a channel.

  3. The interdependence between screening methods and screening libraries.

    PubMed

    Shelat, Anang A; Guy, R Kiplin

    2007-06-01

    The most common methods for discovery of chemical compounds capable of manipulating biological function involve some form of screening. The success of such screens is highly dependent on the chemical materials - commonly referred to as libraries - that are assayed. Classic methods for the design of screening libraries have depended on knowledge of target structure and relevant pharmacophores for target focus, and on simple count-based measures to assess other properties. The recent proliferation of two novel screening paradigms, structure-based screening and high-content screening, prompts a profound rethink about the ideal composition of small-molecule screening libraries. We suggest that currently utilized libraries are not optimal for addressing new targets by high-throughput screening, or complex phenotypes by high-content screening.

  4. Molecular comparison of cattle fever ticks from native and introduced ranges with insights into optimal search areas for classical biological control agents

    USDA-ARS?s Scientific Manuscript database

    Classical biological control using specialist parasitoids, predators and/or nematodes from the native ranges of cattle fever ticks could complement existing control strategies for this livestock pest in the transboundary region between Mexico and Texas. DNA fingerprinting tools were used to compare ...

  5. Dialogues in Performance: A Team-Taught Course on the Afterlife in the Classical and Italian Traditions

    ERIC Educational Resources Information Center

    Gosetti-Murrayjohn, Angela; Schneider, Federico

    2009-01-01

    This article provides a reflection on a team-teaching experience in which performative dialogues between co-instructors and among students provided a pedagogical framework within which comparative analysis of textual traditions within the classical tradition could be optimized. Performative dialogues thus provided a model for and enactment of…

  6. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized via the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations in the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method and the reconstruction error is calculated; it is minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction and is thus more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate. There is no strict requirement on the calibration plate position in the calibration process, and the accuracy is improved significantly by the proposed method.

  7. Feedback stabilization of an oscillating vertical cylinder by POD Reduced-Order Model

    NASA Astrophysics Data System (ADS)

    Tissot, Gilles; Cordier, Laurent; Noack, Bernd R.

    2015-01-01

    The objective is to demonstrate the use of reduced-order models (ROM) based on proper orthogonal decomposition (POD) to stabilize the flow over a vertically oscillating circular cylinder in the laminar regime (Reynolds number equal to 60). The 2D Navier-Stokes equations are first solved with a finite element method, in which the moving cylinder is introduced via an ALE method. Since the POD algorithm cannot be applied directly in fluid-structure interaction, we implemented the fictitious domain method of Glowinski et al. [1], where the solid domain is treated as a fluid undergoing an additional constraint. The POD-ROM is classically obtained by projecting the Navier-Stokes equations onto the first POD modes. At this level, the cylinder displacement is enforced in the POD-ROM through the introduction of Lagrange multipliers. For determining the optimal vertical velocity of the cylinder, a linear quadratic regulator framework is employed. After linearization of the POD-ROM around the steady flow state, the optimal linear feedback gain is obtained as the solution of a generalized algebraic Riccati equation. Finally, when the optimal feedback control is applied, it is shown that the flow converges rapidly to the steady state. In addition, a vanishing control is obtained, proving the efficiency of the control approach.
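
    The linear-quadratic step can be sketched compactly; assuming SciPy and a toy two-state system in place of the linearized POD-ROM, the gain follows from the continuous algebraic Riccati equation as K = R⁻¹BᵀP. The matrices below are placeholders, not the paper's reduced-order model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Optimal state-feedback gain u = -K x for dx/dt = A x + B u.

    Solves the algebraic Riccati equation via SciPy and returns
    K = R^{-1} B^T P; in the paper, A and B come from the POD-ROM
    linearized about the steady flow state.
    """
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy usage on a 2-state system standing in for a reduced-order model.
A = np.array([[0.0, 1.0], [2.0, -0.5]])     # unstable open loop
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.array([[1.0]]))
print(np.linalg.eigvals(A - B @ K))          # closed-loop poles, now stable
```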

  8. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
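
    A bare-bones caricature of the two-stage idea, assuming scikit-learn's plain L1 penalty and placeholder regularization levels (the paper also treats concave penalties and establishes theory that this sketch ignores):

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_lasso_iv(Z, X, y, alpha1=0.1, alpha2=0.1):
    """Sparse two-stage least squares (illustrative sketch).

    Z : (n, q) instruments (e.g., genetic variants)
    X : (n, p) endogenous covariates (e.g., gene expressions)
    y : (n,)   trait.
    Stage 1 regresses each covariate on Z with an L1 penalty to select
    instruments; stage 2 regresses y on the fitted (exogenous) parts of X,
    again with an L1 penalty. Penalty levels are placeholders.
    """
    X_hat = np.column_stack([
        Lasso(alpha=alpha1).fit(Z, X[:, j]).predict(Z)
        for j in range(X.shape[1])
    ])
    stage2 = Lasso(alpha=alpha2).fit(X_hat, y)
    return stage2.coef_, stage2.intercept_
```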

  9. Towards the stabilization of the low density elements in topology optimization with large deformation

    NASA Astrophysics Data System (ADS)

    Lahuerta, Ricardo Doll; Simões, Eduardo T.; Campello, Eduardo M. B.; Pimenta, Paulo M.; Silva, Emilio C. N.

    2013-10-01

    This work addresses the treatment of lower density regions of structures undergoing large deformations during the design process by the topology optimization method (TOM) based on the finite element method. During the design process the nonlinear elastic behavior of the structure is based on exact kinematics. The material model applied in the TOM is based on the solid isotropic microstructure with penalization approach. No void elements are deleted and all internal forces of the nodes surrounding the void elements are considered during the nonlinear equilibrium solution. The distribution of design variables is solved through the method of moving asymptotes, in which the sensitivity of the objective function is obtained directly. In addition, a continuation function and a nonlinear projection function are invoked to obtain a checkerboard-free and mesh-independent design. 2D examples under both the plane strain and plane stress hypotheses are presented and compared. The problem of instability is overcome by adopting a polyconvex constitutive model in conjunction with a suggested relaxation function to stabilize the excessively distorted elements. The exact tangent stiffness matrix is used. The optimal topology results are compared to the results obtained by using the classical Saint Venant-Kirchhoff constitutive law, and strong differences are found.

  10. Information transmission in bosonic memory channels using Gaussian matrix-product states as near-optimal symbols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schäfer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.

    2014-12-04

    We seek a realistic implementation of multimode Gaussian entangled states that can realize the optimal encoding for quantum bosonic Gaussian channels with memory. For a Gaussian channel with classical additive Markovian correlated noise and a lossy channel with non-Markovian correlated noise, we demonstrate the usefulness of Gaussian matrix-product states (GMPS). These states can be generated sequentially and may, in principle, approximate any Gaussian state well. We show that we can achieve up to 99.9% of the classical Gaussian capacity with GMPS requiring squeezing parameters that are reachable with current technology. This may offer a way towards an experimental realization.

  11. Helicopter Control Energy Reduction Using Moving Horizontal Tail

    PubMed Central

    Oktay, Tugrul; Sal, Firat

    2015-01-01

    A helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, and together they are linearized around a straight level flight condition. A specific variance-constrained control strategy, namely, output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely, simultaneous perturbation stochastic approximation (i.e., SPSA). In order to observe the improvement in behavior over classical controls, closed-loop analyses are performed. PMID:26180841

  12. A review of biomedical multiphoton microscopy and its laser sources

    NASA Astrophysics Data System (ADS)

    Lefort, Claire

    2017-10-01

    Multiphoton microscopy (MPM) has been the subject of major development efforts for about 25 years for imaging biological specimens at the micron scale, and is presented as an elegant alternative to classical fluorescence methods such as confocal microscopy. In this topical review, the main interests and technical requirements of MPM are addressed, with a focus on the crucial role of the excitation source in the optimization of multiphoton processes. Then, an overview of the different sources successfully demonstrated in the literature for MPM is presented, and their physical parameters are inventoried. A classification of these sources according to their ability to optimize multiphoton processes is proposed, following a protocol found in the literature. Starting from these considerations, a suggested identikit of the ideal laser source for MPM concludes this topical review. Dedicated to Martin.

  13. Optimal staining methods for delineation of cortical areas and neuron counts in human brains.

    PubMed

    Uylings, H B; Zilles, K; Rajkowska, G

    1999-04-01

    For the cytoarchitectonic delineation of cortical areas in the human brain, the Gallyas staining for somata, with its sharp contrast between cell bodies and neuropil, is preferable to the classical Nissl staining, the more so when an image analysis system is used. This Gallyas staining, however, does not appear to be appropriate for counting neuron numbers in pertinent brain areas, due to the lack of distinct cytological features between small neurons and glial cells; for cell counting, Nissl is preferable. An optimal design for cell counting must therefore apply at least both the Gallyas and the Nissl staining, the former for the cytoarchitectural delineation of cortical areas and the latter for counting the number of neurons in the pertinent cortical areas. Copyright 1999 Academic Press.

  14. Optimal use of modern antibiotics: emerging trends.

    PubMed

    Polk, R

    1999-08-01

    Development of new antimicrobial drugs is an essential component in the effort to remain ahead of emerging microbial resistance. However, when new antibiotics are used with unrestrained enthusiasm, a predictable consequence is the further expansion of resistance. This problem is well known to the infectious diseases specialist and is increasingly appreciated by the nonspecialist and the public. A far more sensible strategy is to identify new ways to use these drugs to increase the duration of their usefulness. New methods to optimize antibiotic selection, dose, and duration of therapy are being investigated, and application of some of these strategies has been shown to have a favorable impact on resistance. Much of the classic thinking of how to use antibiotics is changing, and these newer strategies may result in prolongation of the era of the "antibiotic miracle."

  15. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem and find a transformation that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. The Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will be tested by solving QUBO instances defined on the Chimera graphs of the quantum computer.
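
    On tiny instances the QUBO target can be checked by brute force, which makes the formulation concrete; the sketch below enumerates binary vectors for a hand-made Q matrix and is purely a classical stand-in for the annealer (the Q shown is illustrative, not derived from the assimilation cost).

```python
import itertools
import numpy as np

def qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors (tiny instances).

    Real problems are first mapped to such a Q matrix (here, the
    assimilation cost after binary wavelet encoding of the variables)
    and then embedded on the Chimera graph instead of being enumerated.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: minimum at x = (1, 0, 1) with energy -4.
Q = np.array([[-2.0, 4.0, -1.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])
print(qubo_bruteforce(Q))
```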

  16. Using particle swarm optimization to enhance PI controller performances for active and reactive power control in wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Taleb, M.; Cherkaoui, M.; Hbib, M.

    2018-05-01

    Recently, renewable energy sources have been seriously impacting the power quality of grids in terms of frequency and voltage stability, due to their intermittency and limited forecasting accuracy. Among these sources, wind energy conversion systems (WECS) have received great interest, especially in the configuration with a Doubly Fed Induction Generator. However, WECS are strongly nonlinear, which makes them difficult to control with classical approaches such as a PI. In this paper, we deepen the study of the PI controller used in the active and reactive power control of this kind of WECS. Particle Swarm Optimization (PSO) is suggested to improve its dynamic performance and its robustness against parameter variations. This work highlights the performance of PSO-optimized PI control against a classical PI tuned with a pole-compensation strategy. Simulations are carried out in the MATLAB-SIMULINK software.
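
    A minimal sketch of the approach, assuming a toy first-order plant and an ITAE cost in place of the WECS current loops; the PSO below is the textbook global-best variant with arbitrary coefficients, not the authors' tuned setup.

```python
import numpy as np

def step_response_cost(kp, ki, t_end=5.0, dt=0.001):
    """ITAE cost of a PI-controlled first-order plant G(s) = 1/(s+1),
    a toy stand-in for the WECS power loops of the paper."""
    n = int(t_end / dt)
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(n):
        e = 1.0 - y                     # unit step reference
        integ += e * dt
        u = kp * e + ki * integ         # PI control law
        y += dt * (-y + u)              # Euler step of dy/dt = -y + u
        cost += (k * dt) * abs(e) * dt  # integral of t*|e| (ITAE)
    return cost

def pso(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Textbook global-best particle swarm optimization over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([cost(*p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(*p) for p in x])
        improved = vals < pval
        pbest[improved], pval[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

gains, best = pso(step_response_cost, np.array([[0.0, 10.0], [0.0, 10.0]]))
print("PSO-tuned (Kp, Ki):", gains, "ITAE:", best)
```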

  17. An Optimized Transient Dual Luciferase Assay for Quantifying MicroRNA Directed Repression of Targeted Sequences

    PubMed Central

    Moyle, Richard L.; Carvalhais, Lilia C.; Pretorius, Lara-Simone; Nowak, Ekaterina; Subramaniam, Gayathery; Dalton-Morgan, Jessica; Schenk, Peer M.

    2017-01-01

    Studies investigating the action of small RNAs on computationally predicted target genes require some form of experimental validation. Classical molecular methods of validating microRNA action on target genes are laborious, while approaches that tag predicted target sequences to qualitative reporter genes encounter technical limitations. The aim of this study was to address the challenge of experimentally validating large numbers of computationally predicted microRNA-target transcript interactions using an optimized, quantitative, cost-effective, and scalable approach. The presented method combines transient expression via agroinfiltration of Nicotiana benthamiana leaves with a quantitative dual luciferase reporter system, where firefly luciferase is used to report the microRNA-target sequence interaction and Renilla luciferase is used as an internal standard to normalize expression between replicates. We report the appropriate concentration of N. benthamiana leaf extracts and dilution factor to apply in order to avoid inhibition of firefly LUC activity. Furthermore, the optimal ratio of microRNA precursor expression construct to reporter construct and duration of the incubation period post-agroinfiltration were determined. The optimized dual luciferase assay provides an efficient, repeatable and scalable method to validate and quantify microRNA action on predicted target sequences. The optimized assay was used to validate five predicted targets of rice microRNA miR529b, with as few as six technical replicates. The assay can be extended to assess other small RNA-target sequence interactions, including assessing the functionality of an artificial miRNA or an RNAi construct on a targeted sequence. PMID:28979287

  18. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible neighborhoods clustering algorithm, which has a very interesting theoretical maximal complexity. Then, candidate objects are extracted and initially delineated by an optimized region merging algorithm that is based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters that are required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and in industrial computer vision.

  19. Network clustering and community detection using modulus of families of loops.

    PubMed

    Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina

    2017-01-01

    We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.

  20. Portfolio selection and asset pricing under a benchmark approach

    NASA Astrophysics Data System (ADS)

    Platen, Eckhard

    2006-10-01

    The paper presents classical and new results on portfolio optimization, as well as the fair pricing concept for derivative pricing under the benchmark approach. The growth optimal portfolio is shown to be a central object in a market model. It links asset pricing and portfolio optimization. The paper argues that the market portfolio is a proxy of the growth optimal portfolio. By choosing the drift of the discounted growth optimal portfolio as parameter process, one obtains a realistic theoretical market dynamics.

  1. Integrated analysis and design of thick composite structures for optimal passive damping characteristics

    NASA Technical Reports Server (NTRS)

    Saravanos, D. A.

    1993-01-01

    The development of novel composite mechanics for the analysis of damping in composite laminates and structures and the more significant results of this effort are summarized. Laminate mechanics based on piecewise continuous in-plane displacement fields are described that can represent both intralaminar stresses and interlaminar shear stresses and the associated effects on the stiffness and damping characteristics of a composite laminate. Among other features, the mechanics can accurately model the static and damped dynamic response of either thin or thick composite laminates, as well as specialty laminates with embedded compliant damping layers. The discrete laminate damping theory is further incorporated into structural analysis methods. In this context, an exact semi-analytical method for the simulation of the damped dynamic response of composite plates was developed. A finite element based method and a specialty four-node plate element were also developed for the analysis of composite structures of variable shape and boundary conditions. Numerous evaluations and applications demonstrate the quality and superiority of the mechanics in predicting the damped dynamic characteristics of composite structures. Finally, effort was focused on the development of optimal tailoring methods for the design of thick composite structures, based on the analytical capability developed. Applications on composite plates illustrated the influence of composite mechanics in the optimal design of composites and the potential for significant deviations in the resultant designs when more simplified (classical) laminate theories are used.

  2. Verification of Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Behar-Lafenetre, Stephanie; Cornillon, Laurence; Rancurel, Michael; De Graaf, Dennis; Hartmann, Peter; Coe, Graham; Laine, Benoit

    2012-07-01

    In the framework of the “Mechanical Design and Verification Methodologies for Ceramic Structures” contract [1] awarded by ESA, Thales Alenia Space has investigated the literature and practices in affiliated industries to propose a methodological guideline for the verification of ceramic spacecraft and instrument structures. It has been written in order to be applicable to most types of ceramic or glass-ceramic materials - typically Cesic®, HBCesic®, Silicon Nitride, Silicon Carbide and ZERODUR®. The proposed guideline describes the activities to be performed at material level in order to cover all the specific aspects of ceramics (Weibull distribution, brittle behaviour, sub-critical crack growth). Elementary tests and their post-processing methods are described, and recommendations for optimization of the test plan are given in order to have a consistent database. The application of this method is shown on an example in a dedicated article [7]. Then the verification activities to be performed at system level are described. This includes classical verification activities based on the relevant standard (ECSS Verification [4]), plus specific analytical, testing and inspection features. The analysis methodology takes into account the specific behaviour of ceramic materials, especially the statistical distribution of failures (Weibull) and the method to transfer it from elementary data to a full-scale structure. The demonstration of the efficiency of this method is described in a dedicated article [8]. The verification is completed by classical full-scale testing activities. Indications about proof testing, cases of use and implementation are given, and specific inspection and protection measures are described. These additional activities are necessary to ensure the required reliability. The aim of the guideline is to describe how to reach the same reliability level as for structures made of more classical materials (metals, composites).

  3. Nitrogen vacancy, self-interstitial diffusion, and Frenkel-pair formation/dissociation in B1 TiN studied by ab initio and classical molecular dynamics with optimized potentials

    NASA Astrophysics Data System (ADS)

    Sangiovanni, D. G.; Alling, B.; Steneteg, P.; Hultman, L.; Abrikosov, I. A.

    2015-02-01

    We use ab initio and classical molecular dynamics (AIMD and CMD) based on the modified embedded-atom method (MEAM) potential to simulate diffusion of N vacancy and N self-interstitial point defects in B1 TiN. TiN MEAM parameters are optimized to obtain CMD nitrogen point-defect jump rates in agreement with AIMD predictions, as well as an excellent description of TiNx (˜0.7

  4. Optimal discrimination of M coherent states with a small quantum computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Marcus P. da; Guha, Saikat; Dutton, Zachary

    2014-12-04

    The ability to distinguish between coherent states optimally plays an important role in the efficient usage of quantum resources for classical communication and sensing applications. While it has been known since the early 1970s how to optimally distinguish between two coherent states, generalizations to larger sets of coherent states have so far failed to reach optimality. In this work, we outline how optimality can be achieved by using a small quantum computer, building on recent proposals for optimal qubit state discrimination with multiple copies.

  5. Toward simulating complex systems with quantum effects

    NASA Astrophysics Data System (ADS)

    Kenion-Hanrath, Rachel Lynn

    Quantum effects like tunneling, coherence, and zero-point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer-sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero-point energy effects. We are able to achieve this by employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.

  6. Quasi-classical approaches to vibronic spectra revisited

    NASA Astrophysics Data System (ADS)

    Karsten, Sven; Ivanov, Sergei D.; Bokarev, Sergey I.; Kühn, Oliver

    2018-03-01

    The framework to approach quasi-classical dynamics in the electronic ground state is well established and is based on the Kubo-transformed time correlation function (TCF), being the most classical-like quantum TCF. Here we discuss whether the choice of the Kubo-transformed TCF as a starting point for simulating vibronic spectra is as unambiguous as it is for vibrational ones. Employing imaginary-time path integral techniques in combination with the interaction representation allowed us to formulate a method for simulating vibronic spectra in the adiabatic regime that takes nuclear quantum effects and dynamics on multiple potential energy surfaces into account. Further, a generalized quantum TCF is proposed that contains many well-established TCFs, including the Kubo one, as particular cases. Importantly, it also provides a framework to construct new quantum TCFs. Applying the developed methodology to the generalized TCF leads to a plethora of simulation protocols, which are based on the well-known TCFs as well as on new ones. Their performance is investigated on 1D anharmonic model systems at finite temperatures. It is shown that the protocols based on the new TCFs may lead to superior results with respect to those based on the common ones. The strategies to find the optimal approach are discussed.

  7. Elementary test for nonclassicality based on measurements of position and momentum

    NASA Astrophysics Data System (ADS)

    Fresta, Luca; Borregaard, Johannes; Sørensen, Anders S.

    2015-12-01

    We generalize a nonclassicality test described by Kot et al. [Phys. Rev. Lett. 108, 233601 (2012), 10.1103/PhysRevLett.108.233601], which can be used to rule out any classical description of a physical system. The test is based on measurements of quadrature operators and works by proving a contradiction with the classical description in terms of a probability distribution in phase space. As opposed to the previous work, we generalize the test to include states without rotational symmetry in phase space. Furthermore, we compare the performance of the nonclassicality test with classical tomography methods based on the inverse Radon transform, which can also be used to establish the quantum nature of a physical system. In particular, we consider a nonclassicality test based on the so-called filtered back-projection formula. We show that the general nonclassicality test is conceptually simpler, requires less assumptions on the system, and is statistically more reliable than the tests based on the filtered back-projection formula. As a specific example, we derive the optimal test for quadrature squeezed single-photon states and show that the efficiency of the test does not change with the degree of squeezing.

  8. Multivariate constrained shape optimization: Application to extrusion bell shape for pasta production

    NASA Astrophysics Data System (ADS)

    Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco

    2017-10-01

    Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating the physical, chemical and biological changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time and money consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom is considered as part of the unknowns: this implies that the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of OSD are countless. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one possible application of OSD, namely how the extrusion bell shape for pasta production can be designed by applying a multivariate constrained shape optimization.

  9. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, its objective function can be optimized in a reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) the lack of explicit mapping functions, which adds further computational cost when projecting a new sample into the low-dimensional subspace; and 3) the inability to obtain the optimal discriminant vectors that would further optimize the objective of DLPP. To overcome these weaknesses, this paper proposes a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP). The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, extracting the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results on face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  10. Active vibration mitigation of distributed parameter, smart-type structures using Pseudo-Feedback Optimal Control (PFOC)

    NASA Technical Reports Server (NTRS)

    Patten, W. N.; Robertshaw, H. H.; Pierpont, D.; Wynn, R. H.

    1989-01-01

    A new, near-optimal feedback control technique is introduced that is shown to provide excellent vibration attenuation for those distributed parameter systems that are often encountered in the areas of aeroservoelasticity and large space systems. The technique relies on a novel solution methodology for the classical optimal control problem. Specifically, the quadratic regulator control problem for a flexible vibrating structure is first cast in a weak functional form that admits an approximate solution. The necessary conditions (first-order) are then solved via a time finite-element method. The procedure produces a low dimensional, algebraic parameterization of the optimal control problem that provides a rigorous basis for a discrete controller with a first-order like hold output. Simulation has shown that the algorithm can successfully control a wide variety of plant forms including multi-input/multi-output systems and systems exhibiting significant nonlinearities. In order to firmly establish the efficacy of the algorithm, a laboratory control experiment was implemented to provide planar (bending) vibration attenuation of a highly flexible beam (with a first clamped-free mode of approximately 0.5 Hz).

  11. Basis Function Approximation of Transonic Aerodynamic Influence Coefficient Matrix

    NASA Technical Reports Server (NTRS)

    Li, Wesley Waisang; Pak, Chan-Gi

    2010-01-01

    A technique for approximating the modal aerodynamic influence coefficient [AIC] matrices by using basis functions has been developed and validated. An application of the resulting approximated modal AIC matrix for a flutter analysis in the transonic speed regime has been demonstrated. This methodology can be applied to unsteady subsonic, transonic and supersonic aerodynamics. The method requires the unsteady aerodynamics in the frequency domain. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root-locus et cetera. The unsteady aeroelastic analysis for design optimization using unsteady transonic aerodynamic approximation is demonstrated using the ZAERO(TradeMark) flutter solver (ZONA Technology Incorporated, Scottsdale, Arizona). The technique presented has been shown to offer consistent flutter speed prediction on an aerostructures test wing [ATW] 2 configuration with negligible loss in precision in the transonic speed regime. These results may have practical significance in the analysis of aircraft aeroelastic calculations and could lead to a more efficient design optimization cycle.

  12. Adaptive multi-resolution Modularity for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He

    2018-02-01

    Community structure is a common topological property of complex networks that has attracted much attention from various fields. Optimizing a quality function such as Modularity is a popular strategy for community detection. Here, we introduce a general definition of Modularity, from which several classical (multi-resolution) Modularity functions can be derived, and then propose an adaptive (multi-resolution) Modularity that can combine the advantages of the different functions. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of multi-resolution Modularity in community detection. The adaptive Modularity, as a multi-resolution method, can naturally resolve the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnection of communities and delay their breakup in heterogeneous networks; it is thus expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second-type limit of Modularity.
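
    The multi-resolution behavior discussed here can be illustrated with networkx, whose community routines in recent versions expose a resolution parameter (the graph and the resolution values below are arbitrary, and the paper's adaptive Modularity is not reproduced):

      import networkx as nx
      from networkx.algorithms import community

      G = nx.karate_club_graph()
      for gamma in (0.5, 1.0, 2.0):                 # resolution sweep
          comms = community.greedy_modularity_communities(G, resolution=gamma)
          q = community.modularity(G, comms, resolution=gamma)
          print(f"gamma={gamma}: {len(comms)} communities, Q={q:.3f}")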

  13. Basis Function Approximation of Transonic Aerodynamic Influence Coefficient Matrix

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-gi

    2011-01-01

    A technique for approximating the modal aerodynamic influence coefficients matrices by using basis functions has been developed and validated. An application of the resulting approximated modal aerodynamic influence coefficients matrix for a flutter analysis in transonic speed regime has been demonstrated. This methodology can be applied to the unsteady subsonic, transonic, and supersonic aerodynamics. The method requires the unsteady aerodynamics in frequency-domain. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root-locus et cetera. The unsteady aeroelastic analysis for design optimization using unsteady transonic aerodynamic approximation is being demonstrated using the ZAERO flutter solver (ZONA Technology Incorporated, Scottsdale, Arizona). The technique presented has been shown to offer consistent flutter speed prediction on an aerostructures test wing 2 configuration with negligible loss in precision in transonic speed regime. These results may have practical significance in the analysis of aircraft aeroelastic calculation and could lead to a more efficient design optimization cycle.

  14. Neural networks for feedback feedforward nonlinear control systems.

    PubMed

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.
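
    A minimal sketch of the core idea, constraining the control strategy to a feedforward network and obtaining the weights by gradient descent through the simulated dynamics, is given below; PyTorch autograd stands in for the paper's hand-derived recursive gradient equations, and the plant, cost weights and architecture are illustrative assumptions:

      import torch

      torch.manual_seed(0)
      net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                                torch.nn.Linear(16, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-2)
      dt, T = 0.05, 40

      for epoch in range(300):
          x = torch.rand(32, 1) * 2 - 1        # batch of initial states in [-1, 1]
          target = torch.rand(32, 1) * 2 - 1   # batch of set-points to track
          cost = 0.0
          for _ in range(T):
              u = net(torch.cat([x, target], dim=1))  # neural feedback law u(x, x_ref)
              x = x + dt * (-x**3 + u)                # toy nonlinear plant, Euler step
              cost = cost + ((x - target)**2 + 0.01 * u**2).sum()  # quadratic cost for simplicity
          opt.zero_grad()
          cost.backward()    # backpropagation through the rollout
          opt.step()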

  15. A traveling salesman approach for predicting protein functions.

    PubMed

    Johnson, Olin; Liu, Jing

    2006-10-12

    Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful example of using computer science knowledge and algorithms to study biological problems.

  16. A traveling salesman approach for predicting protein functions

    PubMed Central

    Johnson, Olin; Liu, Jing

    2006-01-01

    Background Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Results Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Conclusion Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful example of using computer science knowledge and algorithms to study biological problems. PMID:17147783

  17. Multivariate Approaches for Simultaneous Determination of Avanafil and Dapoxetine by UV Chemometrics and HPLC-QbD in Binary Mixtures and Pharmaceutical Product.

    PubMed

    2016-04-07

    Multivariate UV-spectrophotometric methods and a Quality by Design (QbD) HPLC method are described for the concurrent estimation of avanafil (AV) and dapoxetine (DP) in a binary mixture and in the dosage form. Chemometric methods were developed, including classical least-squares, principal component regression, partial least-squares, and multiway partial least-squares. Analytical figures of merit, such as sensitivity, selectivity, analytical sensitivity, LOD, and LOQ, were determined. QbD consists of three steps, starting with a screening approach to determine the critical process parameters and response variables, followed by an understanding of factors and levels, and lastly the application of a Box-Behnken design containing four critical factors that affect the method. From an Ishikawa diagram and a risk assessment tool, four main factors were selected for optimization. Design optimization, statistical calculation, and final-condition optimization of all the reactions were carried out. Twenty-five experiments were run, and a quadratic model was used for all response variables. Desirability plots, surface plots, the design space, and three-dimensional plots were calculated. Under the optimized conditions, HPLC separation was achieved on a Phenomenex Gemini C18 column (250 × 4.6 mm, 5 μm) using acetonitrile-buffer (ammonium acetate buffer at pH 3.7 with acetic acid) as the mobile phase at a flow rate of 0.7 mL/min. Quantification was done at 239 nm, and the temperature was set at 20°C. The developed methods were validated and successfully applied for the simultaneous determination of AV and DP in the dosage form.
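
    The twenty-five runs mentioned above are consistent with a four-factor Box-Behnken design (6 factor pairs x 4 corner combinations + 1 centre point). A sketch of generating such a design and fitting the quadratic model by least squares, with random placeholder responses in place of the chromatographic data:

      import itertools
      import numpy as np

      # 4-factor Box-Behnken design: +/-1 on each factor pair, others at 0,
      # plus one centre point -> 6 * 4 + 1 = 25 runs, matching the abstract.
      runs = []
      for i, j in itertools.combinations(range(4), 2):
          for a, b in itertools.product((-1, 1), repeat=2):
              row = [0.0, 0.0, 0.0, 0.0]
              row[i], row[j] = a, b
              runs.append(row)
      runs.append([0.0, 0.0, 0.0, 0.0])
      X = np.array(runs)

      # Quadratic response-surface model: intercept, linear, square, interaction terms.
      cols = [np.ones(len(X))]
      cols += [X[:, i] for i in range(4)]
      cols += [X[:, i]**2 for i in range(4)]
      cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(4), 2)]
      M = np.column_stack(cols)

      y = np.random.default_rng(1).normal(size=len(X))   # placeholder responses
      beta, *_ = np.linalg.lstsq(M, y, rcond=None)       # fitted model coefficients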

  18. State transformations and Hamiltonian structures for optimal control in discrete systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, S.

    2006-04-01

    Preserving usual definition of Hamiltonian H as the scalar product of rates and generalized momenta we investigate two basic classes of discrete optimal control processes governed by the difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at: discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed with particular attention paid to models nonlinear in the time interval θ.

  19. Panel Flutter Emulation Using a Few Concentrated Forces

    NASA Astrophysics Data System (ADS)

    Dhital, Kailash; Han, Jae-Hung

    2018-04-01

    The objective of this paper is to study the feasibility of panel flutter emulation using a few concentrated forces. The concentrated forces are considered to be equivalent to the aerodynamic forces; the equivalence is established using the surface spline method and the principle of virtual work. The structural modeling of the plate is based on classical plate theory and the aerodynamic modeling on piston theory. The present approach differs from linear panel flutter analysis in how the modal aerodynamic forces are formed, while the structural properties remain unchanged. The solutions of the flutter problem are obtained numerically using the standard eigenvalue procedure. A few concentrated forces were considered, with an optimization step to determine their optimal locations. The optimization process minimizes the error between the flutter bounds from the emulated and linear flutter analysis methods. The emulated flutter results for a square plate with four different boundary conditions using six concentrated forces match the reference values with minimal error. The results demonstrate the workability and viability of using concentrated forces to emulate real panel flutter. In addition, the paper includes parametric studies of linear panel flutter for which adequate literature is not available.

  20. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method of state estimation for stochastic systems with quantised discrete-time observations that is free of the mentioned drawbacks. We formulate a general form of the optimal filter derived from a solution of the Fokker-Planck equation, and then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process and derive analytic formulae for the approximated optimal filter, also extending the results to the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation, the state estimate is very close to the true process value. These results open possibilities for further analysis, especially for more complex processes.

  1. Profiling of modified nucleosides from ribonucleic acid digestion by supercritical fluid chromatography coupled to high resolution mass spectrometry.

    PubMed

    Laboureur, Laurent; Guérineau, Vincent; Auxilien, Sylvie; Yoshizawa, Satoko; Touboul, David

    2018-02-16

    A method based on supercritical fluid chromatography coupled to high-resolution mass spectrometry for the profiling of canonical and modified nucleosides was optimized and compared to classical reverse-phase liquid chromatography in terms of separation, number of detected modified nucleosides and sensitivity. Limits of detection and quantification were measured using a statistical method, and the quantification of twelve nucleosides from a tRNA digest of E. coli is in good agreement with previously reported data. The results highlight the complementarity of both separation techniques for covering the largest view of nucleoside modifications in forthcoming epigenetic studies.

  2. Advection modes by optimal mass transfer

    NASA Astrophysics Data System (ADS)

    Iollo, Angelo; Lombardi, Damiano

    2014-02-01

    Classical model reduction techniques approximate the solution of a physical model by a limited number of global modes, usually determined by variants of principal component analysis. Global modes can lead to reduced models that perform well in terms of stability and accuracy. However, when the physics of the model is mainly characterized by advection, the nonlocal representation of the solution by global modes essentially reduces to a Fourier expansion. In this paper we describe a method to determine a low-order representation of advection, based on the solution of Monge-Kantorovich mass transfer problems. Examples of application to point vortex scattering, the Korteweg-de Vries equation, and the advection of hurricane Dean are discussed.
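
    In one dimension the Monge-Kantorovich problem has a closed-form solution, T = G^{-1} o F with F and G the source and target cumulative distribution functions, which conveys why transport maps capture advection that global modes represent poorly. A minimal numerical sketch with arbitrary stand-in densities:

      import numpy as np

      # 1D Monge map by CDF inversion: T = G^{-1} o F.
      x = np.linspace(-5.0, 5.0, 1001)
      f = np.exp(-(x + 2.0)**2)            # source density (unnormalized)
      g = np.exp(-((x - 2.0)**2) / 0.5)    # target density (unnormalized)

      F = np.cumsum(f); F /= F[-1]         # source CDF
      G = np.cumsum(g); G /= G[-1]         # target CDF
      T = np.interp(F, G, x)               # T[k]: destination of the mass at x[k]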

  3. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    NASA Astrophysics Data System (ADS)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-01

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
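
    The essence of such extended-Lagrangian schemes is that the initial guess itself is propagated by a time-reversible integrator instead of being copied from the last converged solution. A toy sketch of this structure follows; the self-consistency condition, the coupling constant kappa and the driving trajectory are invented for illustration, and this is not the AMOEBA or Onetep integrator:

      import numpy as np

      def scf_solve(guess, x):
          # Stand-in self-consistency loop (toy fixed-point condition).
          q = guess
          for _ in range(3):
              q = 0.5 * (q + np.tanh(x - q))
          return q

      kappa = 1.8                                   # coupling of auxiliary to SCF solution
      x_traj = np.sin(np.linspace(0.0, 6.0, 200))   # externally prescribed "nuclear" motion
      p_prev = p = scf_solve(0.0, x_traj[0])
      for x in x_traj:
          q = scf_solve(p, x)                       # converge starting from propagated guess
          # time-reversible (Verlet-like) update of the auxiliary guess:
          p, p_prev = 2.0 * p - p_prev + kappa * (q - p), p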

  4. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory.

    PubMed

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  5. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE PAGES

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; ...

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  6. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  7. Influence of musical and psychoacoustical training on pitch discrimination.

    PubMed

    Micheyl, Christophe; Delhommeau, Karine; Perrot, Xavier; Oxenham, Andrew J

    2006-09-01

    This study compared the influence of musical and psychoacoustical training on auditory pitch discrimination abilities. In a first experiment, pitch discrimination thresholds for pure and complex tones were measured in 30 classical musicians and 30 non-musicians, none of whom had prior psychoacoustical training. The non-musicians' mean thresholds were more than six times larger than those of the classical musicians initially, and still about four times larger after 2 h of training using an adaptive two-interval forced-choice procedure; this difference is two to three times larger than suggested by previous studies. The musicians' thresholds were close to those measured in earlier psychoacoustical studies using highly trained listeners, and showed little improvement with training; this suggests that classical musical training can lead to optimal or nearly optimal pitch discrimination performance. A second experiment was performed to determine how much additional training was required for the non-musicians to obtain thresholds as low as those of the classical musicians from experiment 1. Eight new non-musicians with no prior training practiced the frequency discrimination task for a total of 14 h. It took between 4 and 8 h of training for their thresholds to become as small as those measured in the classical musicians from experiment 1. These findings supplement and qualify earlier data in the literature regarding the respective influence of musical and psychoacoustical training on pitch discrimination performance.

  8. Establishment and validation for the theoretical model of the vehicle airbag

    NASA Astrophysics Data System (ADS)

    Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao

    2015-05-01

    The current design and optimization of the occupant restraint system (ORS) are based on numerous physical tests and mathematical simulations. These two methods are overly time-consuming and complex for the concept design phase of the ORS, though they are quite effective and accurate. Therefore, a fast and direct design and optimization method is needed in the concept design phase of the ORS. Since the airbag system is a crucial part of the ORS, this paper establishes a theoretical model of the vehicle airbag in order to clarify the interaction between occupants and airbags, and builds on it a fast design and optimization method for airbags in the concept design phase. First, a theoretical expression of the simplified mechanical relationship between the airbag's design parameters and the occupant response is developed based on classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to describe this relationship. Using MATLAB, an iterative algorithm with discrete variables is applied to solve the proposed theoretical model for random inputs within a certain scope. Validations with MADYMO software prove the validity and accuracy of the theoretical model in two principal design parameters, the inflated gas mass and the vent diameter, within a regular range. This research contributes to a deeper comprehension of the relationship between occupants and airbags, provides a fast design and optimization method for the airbag's principal parameters in the concept design phase, and supplies the range of initial airbag design parameters for subsequent CAE simulations and physical tests.
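
    A toy version of such a momentum-theorem/ideal-gas airbag model conveys its structure: bag pressure from the ideal gas state equation decelerates the occupant while gas mass vents through an orifice. All parameter values below are hypothetical and the vent law is a crude orifice approximation, not the paper's calibrated model:

      import math

      m_occ, A_bag = 75.0, 0.15         # occupant mass (kg), bag contact area (m^2)
      V, T_gas, R = 0.06, 500.0, 287.0  # bag volume (m^3), gas temperature (K), J/(kg*K)
      m_gas, A_vent = 0.08, 2.0e-3      # gas mass in bag (kg), vent area (m^2)
      v, dt, t = 14.0, 1.0e-4, 0.0      # occupant speed (m/s), time step (s), time (s)

      while v > 0.0 and t < 0.3:
          p = m_gas * R * T_gas / V                 # ideal gas state equation
          dp = max(p - 101325.0, 0.0)               # gauge pressure acting on the occupant
          v += -dp * A_bag / m_occ * dt             # momentum theorem: m * dv = -F * dt
          # crude orifice outflow, mdot ~ Cd * A * sqrt(2 * rho * dp):
          m_gas = max(m_gas - 0.6 * A_vent * math.sqrt(2.0 * 1.2 * dp) * dt, 1e-4)
          t += dt
      print(f"occupant stopped after {t:.3f} s")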

  9. On the theory of singular optimal controls in dynamic systems with control delay

    NASA Astrophysics Data System (ADS)

    Mardanov, M. J.; Melikov, T. K.

    2017-05-01

    An optimal control problem with a control delay is considered, and a broader class of singular (in the classical sense) controls is investigated. Various sequences of necessary optimality conditions for singular controls are obtained in recurrent form. These optimality conditions include analogues of the Kelley, Kopp-Moyer, and R. Gabasov conditions, as well as equality-type conditions. In the proof of the main results, the variation of the control is defined using Legendre polynomials.

  10. Opinion control in complex networks

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki

    2015-03-01

    In many political elections, the electorate appears to be a composite of partisan and independent voters. Given that partisans are not likely to convert to a different party, an important goal for a political party could be to mobilize independent voters toward the party with the help of strong leadership, mass media, partisans, and the effects of peer-to-peer influence. Based on the exact solution of classical voter model dynamics in the presence of perfectly partisan voters (i.e., zealots), we propose a computational method that uses a pinning control strategy to maximize the share of a party in a social network of independent voters. The party, corresponding to the controller or zealots, optimizes the nodes to be controlled given the information about the connectivity of independent voters and the set of nodes that the opposing party controls. We show that controlling hubs is generally a good strategy, but the optimized strategy is even better. The superiority of the optimized strategy is particularly evident when the independent voters are connected as directed (rather than undirected) networks.
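
    The setting can be sketched in simulation: a voter model in which zealots never update, with a simple hub heuristic standing in for the paper's optimized pinning strategy (the network, budgets and iteration count are arbitrary):

      import random
      import networkx as nx

      random.seed(1)
      G = nx.barabasi_albert_graph(200, 3, seed=1)
      opp_zealots = set(range(5))                       # opponent-pinned nodes
      free = [n for n in G if n not in opp_zealots]
      our_zealots = set(sorted(free, key=G.degree, reverse=True)[:5])  # hub heuristic

      state = {n: -1 if n in opp_zealots else
                  (+1 if n in our_zealots else random.choice((-1, +1))) for n in G}
      nodes = list(G)
      for _ in range(200_000):                          # asynchronous voter dynamics
          n = random.choice(nodes)
          if n in opp_zealots or n in our_zealots:
              continue                                  # zealots never change opinion
          state[n] = state[random.choice(list(G[n]))]   # copy a random neighbour
      share = sum(v == +1 for v in state.values()) / len(state)
      print(f"share of our opinion: {share:.2f}")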

  11. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  12. Calibration of DEM parameters on shear test experiments using Kriging method

    NASA Astrophysics Data System (ADS)

    Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier

    2017-06-01

    Calibration of powder-mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters with the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear-test experiments are used as calibration experiments. The calibration is performed on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains to simulate is a critical step: too few grains do not represent the realistic behavior of the powder, while a huge number of grains makes the simulation strongly time-consuming. The optimization goal is the minimization of an objective function measuring the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion combines the two quantities provided by the Kriging method, the predicted objective function and the estimated prediction error, and thus quantifies the improvement in the minimization that new simulations at specified DEM parameters would bring.
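
    A compact sketch of such an EGO loop, with a scikit-learn Gaussian process as the Kriging surrogate and the closed-form Expected Improvement criterion; the one-dimensional toy objective stands in for the simulated-versus-measured shear-test discrepancy, and all ranges are illustrative:

      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor

      def expected_improvement(gp, X_cand, y_best):
          mu, sigma = gp.predict(X_cand, return_std=True)
          sigma = np.maximum(sigma, 1e-12)
          z = (y_best - mu) / sigma                  # minimization convention
          return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

      # toy objective standing in for the simulation/measurement discrepancy
      f = lambda x: (x - 0.3)**2 + 0.05 * np.sin(20 * x)
      X = np.array([[0.0], [0.5], [1.0]])
      y = f(X).ravel()
      for _ in range(10):                            # EGO iterations
          gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
          Xc = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
          x_next = Xc[np.argmax(expected_improvement(gp, Xc, y.min()))]
          X = np.vstack([X, [x_next]])
          y = np.append(y, f(x_next))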

  13. An improved method for the calculation of Near-Field Acoustic Radiation Modes

    NASA Astrophysics Data System (ADS)

    Liu, Zu-Bin; Maury, Cédric

    2016-02-01

    Sensing and controlling Acoustic Radiation Modes (ARMs) in the near field of vibrating structures is of great interest for broadband noise reduction or enhancement, as ARMs are velocity distributions defined over a vibrating surface that independently and optimally contribute to the acoustic power in the acoustic field. Present methods, however, only provide far-field ARMs (FFARMs), which are inadequate for the acoustic near-field problem. The Near-Field Acoustic Radiation Modes (NFARMs) are first studied with an improved numerical method, the pressure-velocity method, which relies on the eigendecomposition of the acoustic transfers between the vibrating source and a conformal observation surface, including the sound pressure and velocity transfer matrices. The active and reactive parts of the sound power are separated and lead to active and reactive ARMs. NFARMs are studied for a 2D baffled beam and for a 3D baffled plate, as are the differences between the NFARMs and the classical FFARMs. The NFARMs are compared while varying the frequency and the observation distance to the source. It is found that the efficiencies and shapes of the optimal active ARMs are independent of the distance, while those of the reactive ones depend distinctly on it.

  14. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various parametric elastic-model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme with minimum distance as the optimization criterion, which is more suitable for weak-edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between true and algorithm-extracted contours and from segmentation time, respectively. As a result, the self-affine mapping system offers adequate segmentation time and accuracy together with significant independence from initialization.

  15. Reconstruction of local perturbations in periodic surfaces

    NASA Astrophysics Data System (ADS)

    Lechleiter, Armin; Zhang, Ruming

    2018-03-01

    This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike in purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer applicable. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, that is, locating the support of the perturbation by a simple method; this step reduces the inverse problem in an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to the associated optimization problem, with the perturbation approximated in a finite spline basis. Numerical examples are given at the end of the paper, showing the efficiency of the numerical method.

  16. Optimal navigation for characterizing the role of the nodes in complex networks

    NASA Astrophysics Data System (ADS)

    Cajueiro, Daniel O.

    2010-05-01

    In this paper, we explore how the approach of optimal navigation (Cajueiro (2009) [33]) can be used to evaluate the centrality of a node and to characterize its role in a network. Using the subway network of Boston and the London rapid transit rail as proxies for complex networks, we show that the centrality measures inherited from the approach of optimal navigation may be considered if one desires to evaluate the centrality of the nodes using other pieces of information beyond the geometric properties of the network. Furthermore, evaluating the correlations between these inherited measures and classical measures of centralities such as the degree of a node and the characteristic path length of a node, we have found two classes of results. While for the London rapid transit rail, these inherited measures can be easily explained by these classical measures of centrality, for the Boston underground transportation system we have found nontrivial results.

  17. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  18. Optimal subsystem approach to multi-qubit quantum state discrimination and experimental investigation

    NASA Astrophysics Data System (ADS)

    Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun

    2018-02-01

    Quantum computing offers computational capabilities beyond those of classical computing because of superposition. Distinguishing several quantum states produced as outputs of a quantum algorithm is often a vital computational task. In most cases the states are non-orthogonal due to superposition, and quantum mechanics shows that non-orthogonal states cannot be perfectly distinguished by measurement, forcing repeated measurements. Hence, it is important to determine the optimal measuring method, one requiring fewer repetitions and giving a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as the considerable experimental cost of global operations. Therefore, in this study we propose an optimal subsystem method to avoid these difficulties. We provide an analysis comparing the reduced-subsystem method and the global minimum-error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced, in most circumstances with an acceptable error rate. We believe the optimal subsystem method is a valuable and promising approach for multi-qubit quantum computing applications.
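
    The single-copy benchmark behind any minimum-error discrimination scheme is the Helstrom bound, P_err = (1/2)(1 - ||p0 rho0 - p1 rho1||_1). A numpy sketch for two non-orthogonal qubit states follows (the states are arbitrary examples; the paper's multi-qubit subsystem measurements are not reproduced):

      import numpy as np

      # Helstrom bound: P_err = (1/2) * (1 - || p0*rho0 - p1*rho1 ||_1).
      def dm(psi):
          psi = np.asarray(psi, dtype=complex)
          psi = psi / np.linalg.norm(psi)
          return np.outer(psi, psi.conj())

      rho0 = dm([1.0, 0.0])
      rho1 = dm([np.cos(0.4), np.sin(0.4)])          # non-orthogonal to rho0
      gamma = 0.5 * rho0 - 0.5 * rho1                # equal priors
      trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()   # Hermitian: sum of |eigenvalues|
      p_err = 0.5 * (1.0 - trace_norm)
      print(f"minimum error probability: {p_err:.3f}")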

  19. Cryptosporidium-contaminated water disinfection by a novel Fenton process.

    PubMed

    Matavos-Aramyan, Sina; Moussavi, Mohsen; Matavos-Aramyan, Hedieh; Roozkhosh, Sara

    2017-05-01

    Three novel modified advanced oxidation process systems, an ascorbic acid-, a pro-oxidant- and an ascorbic acid-pro-oxidant-modified Fenton system, were used to study disinfection efficiency on Cryptosporidium-contaminated drinking water samples. Different concentrations of divalent and trivalent iron ions, hydrogen peroxide, ascorbic acid and pro-oxidants at different exposure times were investigated. These novel systems were also compared to the classic Fenton system and to a control system comprising hydrogen peroxide only. The complete in vitro mechanism of the modified Fenton systems is also provided. The results indicate that, within the optimal parameter limits, the ascorbic acid-modified Fenton system decreased Cryptosporidium oocyst viability to 3.91%, while the pro-oxidant-modified and ascorbic acid-pro-oxidant-modified Fenton systems achieved oocyst viabilities of 1.66% and 0%, respectively. The efficiency of the classic Fenton system at optimal conditions was 20.12% oocyst viability; the control system left 86.14% of oocysts viable. The optimum values of the operational parameters were found to be 80 mg L-1 for divalent iron, 30 mg L-1 for ascorbic acid, 30 mmol for hydrogen peroxide, 25 mg L-1 for pro-oxidants, and an exposure time of 5 min. The ascorbic acid-pro-oxidant-modified Fenton system achieved complete water disinfection (0% viability) at the optimal conditions, making this method a feasible process for water disinfection or decontamination, even at industrial scales.

  20. Aerodynamic design of axisymmetric hypersonic wind-tunnel nozzles using least-squares/parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1992-01-01

    A new procedure unifying the best of present classical design practices, CFD, and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind tunnel nozzles. This procedure can be employed to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. The procedure allows full utilization of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, may be used to design new nozzles or improve sections of existing ones, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure.

  1. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of changes in the nozzle wall parameters is evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and for Mach 6 and Mach 15 contoured nozzles.

  2. Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Tan, Vincent Y. F.

    2015-08-01

    We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.
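
    Concretely, a second-order approximation of this kind states that the maximal number M*(n, \varepsilon) of messages transmissible over n channel uses with average error \varepsilon behaves as

      \log M^*(n, \varepsilon) = nC + \sqrt{nV} \, \Phi^{-1}(\varepsilon) + O(\log n),

    where C is the Holevo capacity, V is the channel dispersion, and \Phi^{-1} is the inverse of the Gaussian cumulative distribution function; the exact remainder terms depend on the setting.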

  3. Closed-loop endo-atmospheric ascent guidance for reusable launch vehicle

    NASA Astrophysics Data System (ADS)

    Sun, Hongsheng

    This dissertation focuses on the development of a closed-loop endo-atmospheric ascent guidance algorithm for the 2nd generation reusable launch vehicle. Special attention has been given to the issues that impact viability, complexity and reliability in on-board implementation. The algorithm is called once every guidance update cycle to recalculate the optimal solution based on the current flight condition, taking into account atmospheric effects and path constraints. This differs from traditional ascent guidance algorithms, which operate in a simple open-loop mode inside the atmosphere and later switch to a closed-loop vacuum ascent guidance scheme. The classical finite difference method is shown to be well suited for fast solution of the constrained optimal three-dimensional ascent problem. The initial guesses for the solutions are generated using an analytical vacuum optimal ascent guidance algorithm. A homotopy method is employed to gradually introduce the aerodynamic forces and thereby deform the optimal vacuum solution into the full optimal solution. The vehicle chosen for this study is the Lockheed Martin X-33 lifting-body reusable launch vehicle. To verify the algorithm presented in this dissertation, a series of open-loop and closed-loop tests is performed for three different missions. Wind effects are also studied in the closed-loop simulations. For comparison, the solutions for the same missions are also obtained by two independent optimization software packages. The results clearly establish the feasibility of closed-loop endo-atmospheric ascent guidance of rocket-powered launch vehicles. ATO cases are also tested to assess the ability of the algorithm to autonomously incorporate abort modes.

  4. Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point

    NASA Astrophysics Data System (ADS)

    Zabolotnov, Yu. M.; Lobanov, A. A.

    2017-05-01

    A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is close to the motion in the classical Lagrange case. The method is based on the combined use of the Bellman dynamic programming principle and the averaging method, the latter being used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable during deployment of an orbital cable system, etc.).

  5. Weak Galerkin method for the Biot’s consolidation model

    DOE PAGES

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    2017-08-23

    In this study, we develop a weak Galerkin (WG) finite element method for Biot's consolidation model in the classical displacement-pressure two-field formulation. Weak Galerkin linear finite elements are used for both the displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for the temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides a stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  6. Weak Galerkin method for the Biot’s consolidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaozhe; Mu, Lin; Ye, Xiu

    In this study, we develop a weak Galerkin (WG) finite element method for Biot's consolidation model in the classical displacement-pressure two-field formulation. Weak Galerkin linear finite elements are used for both the displacement and pressure approximations in the spatial discretization. A backward Euler scheme is used for the temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. The WG scheme is designed on general shape-regular polytopal meshes and provides a stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.

  7. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
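
    A toy version of the mixed Bayesian/frequentist calculation: draw the treatment effect from its prior, compute the probability that the regulator's frequentist test succeeds at each sample size, and maximize the expected net benefit. All monetary values, the prior and the test are hypothetical:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      deltas = rng.normal(0.3, 0.1, 5000)       # prior draws of the treatment effect
      B, c, sigma = 1.0e6, 500.0, 1.0           # benefit if licensed, cost per patient, sd
      z_a = norm.ppf(0.975)                     # regulator's one-sided 2.5% test

      def enb(n):
          # P(significant | delta), averaged over the prior (two-arm z-test, n per arm)
          power = norm.cdf(deltas * np.sqrt(n / 2.0) / sigma - z_a)
          return B * power.mean() - c * 2 * n

      ns = np.arange(10, 1000, 5)
      n_opt = ns[np.argmax([enb(n) for n in ns])]
      print(f"optimal per-arm sample size: {n_opt}")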

  8. An A Priori Multiobjective Optimization Model of a Search and Rescue Network

    DTIC Science & Technology

    1992-03-01

    Classical sensitivity analysis and tolerance analysis were used to analyze the frequency assignments generated by the different weightings, including a function for excess coverage of a frequency. Sensitivity analysis is used to investigate the robustness of the frequency assignments produced by the weights of interest, and the linear program solution is used to produce classical sensitivity analysis for the weight ranges.

  9. Processing the Bouguer anomaly map of Biga and the surrounding area by the cellular neural network: application to the southwestern Marmara region

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2007-04-01

    An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features that give rise to gravity anomalies, such as faults or the boundary between two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram typical of neural network theory. The functionality of a CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template; a CNN can thus be considered a nonlinear convolution with these matrices. The template describes the strength of the nearest-neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used to optimize the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was then compared with classical derivative techniques by applying the cross-correlation (CC) method to the same anomaly map, as the latter can detect features that are difficult to identify on Bouguer anomaly maps. The approach was then applied to the Bouguer anomaly map of Biga and its surrounding area in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that geologic boundaries can be detected from Bouguer anomaly maps using the cloning-template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. The approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
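
    The cloning-template dynamics described here can be sketched as a convolutional iteration of the standard CNN state equation x' = -x + A*y + B*u + I with a piecewise-linear output nonlinearity; the template below is a classical edge-detection template, not the RPLA-optimized one from the paper, and the input is random:

      import numpy as np
      from scipy.signal import convolve2d

      def cnn_run(u, A, B, I, steps=60, dt=0.1):
          # Euler iteration of the CNN state equation x' = -x + A*y + B*u + I.
          x = np.zeros_like(u)
          for _ in range(steps):
              y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # piecewise-linear output
              x = x + dt * (-x
                            + convolve2d(y, A, mode='same', boundary='symm')
                            + convolve2d(u, B, mode='same', boundary='symm') + I)
          return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

      # A classical edge-detection cloning template (not the RPLA-optimized one).
      A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)
      B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
      u = 2.0 * np.random.rand(64, 64) - 1.0   # stand-in for a normalized anomaly map
      edges = cnn_run(u, A, B, I=-0.5)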

  10. Topology Optimization - Engineering Contribution to Architectural Design

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

    The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are unnecessary from the objective's point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain, defined by the shape of a structure, is divided into sub-regions to which different materials are assigned. During the design process, material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs such as multi-material building facades. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are mathematical, discrete idealizations of physical systems. Engineering implementations of Cellular Automata require decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction takes place only between neighbouring cells, governed by simple, local update rules based on heuristics or physical laws (a sketch of such a rule is given below). The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the material sub-region locations on the final topologies. The influence of the assumed volume fraction on the final topologies for multi-material structures is also observed and discussed. The results of the numerical calculations show that this approach produces different results compared with classical one-material problems.
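
    A minimal sketch of such a local update rule, assuming the per-cell strain energies come from an external FEM solve (stubbed here as a plain array) and using an illustrative neighbourhood-comparison heuristic rather than the authors' exact rule:

    import numpy as np

    def ca_update(density, energy, vol_frac, move=0.05):
        """One Cellular Automata design update: each cell compares the average
        strain energy of its von Neumann neighbourhood with the global mean and
        adds or removes material accordingly (a simple heuristic local rule)."""
        # neighbourhood average of the energy field (periodic padding for brevity)
        nbr = (np.roll(energy, 1, 0) + np.roll(energy, -1, 0) +
               np.roll(energy, 1, 1) + np.roll(energy, -1, 1) + energy) / 5.0
        new = density + move * np.sign(nbr - nbr.mean())
        new = np.clip(new, 0.001, 1.0)
        # rescale toward the prescribed volume fraction
        return np.clip(new * vol_frac / new.mean(), 0.001, 1.0)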

  11. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    NASA Astrophysics Data System (ADS)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from the preprocessed data: linear polarization intensities and several statistical and physics-based decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function maps non-spherical, non-linearly separable data structures into a space where they can be clustered more easily. In addition, to enhance the results, a Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize the feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method compared to the classical approaches (e.g. a 12% improvement in general). In addition, when the optimization technique is used, a greater improvement in crop classification is observed, e.g. 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and the phenological growth stages.
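
    The PSO component can be sketched as a plain global-best particle swarm; the sphere objective below is an illustrative stand-in for the actual clustering criterion (the particle vector would encode kernel parameters, cluster centres and feature weights):

    import numpy as np

    rng = np.random.default_rng(2)

    def pso(objective, dim, n_particles=20, iters=100,
            w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        """Plain global-best PSO minimiser."""
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([objective(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # stand-in objective: sphere function in place of the clustering criterion
    best, val = pso(lambda p: float(np.sum(p**2)), dim=3)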

  12. Quantum-classical interface based on single flux quantum digital logic

    NASA Astrophysics Data System (ADS)

    McDermott, R.; Vavilov, M. G.; Plourde, B. L. T.; Wilhelm, F. K.; Liebermann, P. J.; Mukhanov, O. A.; Ohki, T. A.

    2018-04-01

    We describe an approach to the integrated control and measurement of a large-scale superconducting multiqubit array comprising up to 10^8 physical qubits using a proximal coprocessor based on the Single Flux Quantum (SFQ) digital logic family. Coherent control is realized by irradiating the qubits directly with classical bitstreams derived from optimal control theory. Qubit measurement is performed by a Josephson photon counter, which provides access to the classical result of projective quantum measurement at the millikelvin stage. We analyze the power budget and physical footprint of the SFQ coprocessor and discuss challenges and opportunities associated with this approach.

  13. Ab Initio Theoretical Studies on the Kinetics of Hydrogen Abstraction Type Reactions of Hydroxyl Radicals with CH3CCl2F and CH3CClF2

    NASA Astrophysics Data System (ADS)

    Saheb, Vahid; Maleki, Samira

    2018-03-01

    The hydrogen abstraction reactions from CH3CCl2F (R-141b) and CH3CClF2 (R-142b) by OH radicals are studied theoretically by semi-classical transition state theory. The stationary points for the reactions are located using the KMLYP density functional method with the 6-311++G(2d,2p) basis set and the MP2 method with the 6-311+G(d,p) basis set. Single-point energy calculations are performed with the CBS-Q and G4 composite methods on the geometries optimized at the KMLYP/6-311++G(2d,2p) level of theory. Vibrational anharmonicity coefficients, x_ij, which are needed for semi-classical transition state theory calculations, are computed at the KMLYP/6-311++G(2d,2p) and MP2/6-311+G(d,p) levels of theory. The computed barrier heights are slightly sensitive to the quantum-chemical method. Thermal rate coefficients are computed over the temperature range from 200 to 2000 K and are shown to be in accordance with available experimental data. On the basis of the computed rate coefficients, the tropospheric lifetimes of CH3CCl2F and CH3CClF2 are estimated to be about 6.5 and 12.0 years, respectively.

  14. Quasivelocities and Optimal Control for underactuated Mechanical Systems

    NASA Astrophysics Data System (ADS)

    Colombo, L.; de Diego, D. Martín

    2010-07-01

    This paper is concerned with the application of the theory of quasivelocities to optimal control of underactuated mechanical systems. Using this theory, we convert the original problem into a variational second-order Lagrangian system subject to constraints. The equations of motion are geometrically derived using an adaptation of the classical Skinner and Rusk formalism.

  15. Quasivelocities and Optimal Control for underactuated Mechanical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colombo, L.; Martin de Diego, D.

    2010-07-28

    This paper is concerned with the application of the theory of quasivelocities to optimal control of underactuated mechanical systems. Using this theory, we convert the original problem into a variational second-order Lagrangian system subject to constraints. The equations of motion are geometrically derived using an adaptation of the classical Skinner and Rusk formalism.

  16. Adaptive synchronized switch damping on an inductor: a self-tuning switching law

    NASA Astrophysics Data System (ADS)

    Kelley, Christopher R.; Kauffman, Jeffrey L.

    2017-03-01

    Synchronized switch damping (SSD) techniques exploit low-power switching between passive circuits connected to piezoelectric material to reduce structural vibration. In the classical implementation of SSD, the piezoelectric material remains in an open circuit for the majority of the vibration cycle and switches briefly to a shunt circuit at every displacement extremum. Recent research indicates that this switch timing is only optimal for excitation exactly at resonance and points to more general optimal switch criteria based on the phase of the displacement and the system parameters. This work proposes a self-tuning approach that implements the more general optimal switch timing for synchronized switch damping on an inductor (SSDI) without needing any knowledge of the system parameters. The law involves a gradient-based search optimization that is robust to noise and uncertainties in the system. Testing of a physical implementation confirms this law successfully adapts to the frequency and parameters of the system. Overall, the adaptive SSDI controller provides better off-resonance steady-state vibration reduction than classical SSDI while matching performance at resonance.
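
    A hedged sketch of the self-tuning idea: estimate the gradient of a measured performance metric with respect to the switch phase by finite differences and climb it. The measure_reduction callback and the toy response below are hypothetical stand-ins for the physical measurement loop, not the authors' switching law:

    import numpy as np

    def tune_switch_phase(measure_reduction, phi0=0.0, step=0.05,
                          probe=0.05, iters=200):
        """Gradient-based self-tuning of the SSDI switch phase: perturb the
        phase, estimate the gradient of the measured vibration reduction by
        finite differences, and climb it."""
        phi = phi0
        for _ in range(iters):
            grad = (measure_reduction(phi + probe) -
                    measure_reduction(phi - probe)) / (2 * probe)
            phi += step * grad
        return phi

    # toy stand-in: performance peaks at phi = 0.3 rad, with measurement noise
    rng = np.random.default_rng(3)
    target = lambda p: -(p - 0.3) ** 2 + 0.005 * rng.normal()
    print(tune_switch_phase(target))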

  17. Energy-landscape paving for prediction of face-centered-cubic hydrophobic-hydrophilic lattice model proteins

    NASA Astrophysics Data System (ADS)

    Liu, Jingfa; Song, Beibei; Liu, Zhaoxia; Huang, Weibo; Sun, Yuanyuan; Liu, Wenjie

    2013-11-01

    Protein structure prediction (PSP) is a classical NP-hard problem in computational biology. The energy-landscape paving (ELP) method is a heuristic global optimization algorithm that has been successfully applied to many optimization problems with complex energy landscapes in continuous space. By introducing a new update mechanism for the histogram function in ELP, and by incorporating the generation of initial conformations based on a greedy strategy and a neighborhood search strategy based on pull moves, an improved energy-landscape paving (ELP+) method is obtained. Twelve general benchmark instances are first tested on both two-dimensional and three-dimensional (3D) face-centered-cubic (FCC) hydrophobic-hydrophilic (HP) lattice models. The lowest energies found by ELP+ are as good as or better than those of other methods in the literature for all instances. Then, five sets of larger-scale instances, denoted S, R, F90, F180, and CASP target instances, on the 3D FCC HP lattice model are tested. The proposed algorithm finds lower energies than the five other methods in the literature. Not unexpectedly, this is particularly pronounced for the longer sequences considered. Computational results show that ELP+ is an effective method for PSP on the FCC HP lattice model.
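
    The core ELP loop can be sketched on a toy one-dimensional landscape as below; the histogram penalty on revisited energy bins is the generic ELP mechanism, not the specific ELP+ update mechanism or the pull-move neighborhood of the paper:

    import numpy as np

    rng = np.random.default_rng(4)

    def elp_minimize(energy, x0, temp=0.5, penalty=0.1, steps=50000,
                     bin_width=0.1):
        """Energy-landscape paving on a 1D toy landscape: Metropolis sampling
        of E~(x) = E(x) + penalty * H(bin(E(x))), where H counts past visits,
        so frequently visited regions are progressively 'paved over'."""
        hist = {}
        x, e = x0, energy(x0)
        best_x, best_e = x, e
        for _ in range(steps):
            xn = x + rng.normal(scale=0.2)                    # local move
            en = energy(xn)
            b = int(np.floor(e / bin_width))
            bn = int(np.floor(en / bin_width))
            et = e + penalty * hist.get(b, 0)                 # paved energies
            etn = en + penalty * hist.get(bn, 0)
            if etn <= et or rng.random() < np.exp((et - etn) / temp):
                x, e = xn, en
            cur = int(np.floor(e / bin_width))
            hist[cur] = hist.get(cur, 0) + 1
            if e < best_e:
                best_x, best_e = x, e
        return best_x, best_e

    # rugged toy landscape with many local minima
    f = lambda x: 0.1 * x**2 + np.sin(5 * x)
    print(elp_minimize(f, x0=3.0))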

  18. A mixture approach to the acoustic properties of a macroscopically inhomogeneous porous aluminum in the equivalent fluid approximation.

    PubMed

    Sacristan, C J; Dupont, T; Sicot, O; Leclaire, P; Verdière, K; Panneton, R; Gong, X L

    2016-10-01

    The acoustic properties of an air-saturated macroscopically inhomogeneous aluminum foam in the equivalent fluid approximation are studied. A reference sample built by forcing a highly compressible melamine foam with conical shape inside a constant diameter rigid tube is studied first. In this process, a radial compression varying with depth is applied. With the help of an assumption on the compressed pore geometry, properties of the reference sample can be modelled everywhere in the thickness and it is possible to use the classical transfer matrix method as theoretical reference. In the mixture approach, the material is viewed as a mixture of two known materials placed in a patchwork configuration and with proportions of each varying with depth. The properties are derived from the use of a mixing law. For the reference sample, the classical transfer matrix method is used to validate the experimental results. These results are used to validate the mixture approach. The mixture approach is then used to characterize a porous aluminium for which only the properties of the external faces are known. A porosity profile is needed and is obtained from the simulated annealing optimization process.

  19. Piezoelectrically actuated flextensional micromachined ultrasound transducers--I: theory.

    PubMed

    Perçin, Gökhan; Khuri-Yakub, Butrus T

    2002-05-01

    This series of two papers considers piezoelectrically actuated flextensional micromachined ultrasound transducers (PAFMUTs) and consists of theory, fabrication, and experimental parts. The theory presented in this paper is developed for an ultrasound transducer application presented in the second part. In the absence of analytical expressions for the equivalent circuit parameters of a flextensional transducer, it is difficult to calculate its optimal parameters and dimensions and difficult to choose suitable materials. The influence of coupling between flexural and extensional deformation and that of coupling between the structure and the acoustic volume on the dynamic response of piezoelectrically actuated flextensional transducer are analyzed using two analytical methods: classical thin (Kirchhoff) plate theory and Mindlin plate theory. Classical thin plate theory and Mindlin plate theory are applied to derive two-dimensional plate equations for the transducer and to calculate the coupled electromechanical field variables such as mechanical displacement and electrical input impedance. In these methods, the variations across the thickness direction vanish by using the bending moments per unit length or stress resultants. Thus, two-dimensional plate equations for a step-wise laminated circular plate are obtained as well as two different solutions to the corresponding systems. An equivalent circuit of the transducer is also obtained from these solutions.

  20. Application of principal component regression and artificial neural network in FT-NIR soluble solids content determination of intact pear fruit

    NASA Astrophysics Data System (ADS)

    Ying, Yibin; Liu, Yande; Fu, Xiaping; Lu, Huishan

    2005-11-01

    Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation and control. However, the majority of today's ANN applications use the back-propagation feed-forward ANN (BP-ANN). In this paper, back-propagation artificial neural networks (BP-ANN) were applied to model the soluble solids content (SSC) of intact pears from their Fourier transform near infrared (FT-NIR) spectra. One hundred and sixty-four pear samples were used to build the calibration models and evaluate the models' predictive ability. The results are compared to the classical calibration approaches, i.e. principal component regression (PCR), partial least squares (PLS) and non-linear PLS (NPLS). The effects of the training parameters on the prediction model were also investigated. BP-ANN combined with principal component regression (PCR) always performed better than the classical PCR, PLS and Weight-PLS methods from the point of view of predictive ability. Based on the results, it can be concluded that FT-NIR spectroscopy and BP-ANN models can be properly employed for rapid and nondestructive determination of fruit internal quality.

  1. Dynamics of open quantum systems by interpolation of von Neumann and classical master equations, and its application to quantum annealing

    NASA Astrophysics Data System (ADS)

    Kadowaki, Tadashi

    2018-02-01

    We propose a method to interpolate dynamics of von Neumann and classical master equations with an arbitrary mixing parameter to investigate the thermal effects in quantum dynamics. The two dynamics are mixed by intervening to continuously modify their solutions, thus coupling them indirectly instead of directly introducing a coupling term. This maintains the quantum system in a pure state even after the introduction of thermal effects and obtains not only a density matrix but also a state vector representation. Further, we demonstrate that the dynamics of a two-level system can be rewritten as a set of standard differential equations, resulting in quantum dynamics that includes thermal relaxation. These equations are equivalent to the optical Bloch equations at the weak coupling and asymptotic limits, implying that the dynamics cause thermal effects naturally. Numerical simulations of ferromagnetic and frustrated systems support this idea. Finally, we use this method to study thermal effects in quantum annealing, revealing nontrivial performance improvements for a spin glass model over a certain range of annealing time. This result may enable us to optimize the annealing time of real annealing machines.

  2. Arm Locking for the Laser Interferometer Space Antenna

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Thorpe, J. I.; Livas, J.

    2009-01-01

    The Laser Interferometer Space Antenna (LISA) mission is a planned gravitational wave detector consisting of three spacecraft in heliocentric orbit. Laser interferometry is used to measure distance fluctuations between test masses aboard each spacecraft to the picometer level over a 5 million kilometer separation. Laser frequency fluctuations must be suppressed in order to meet the measurement requirements. Arm-locking, a technique that uses the constellation of spacecraft as a frequency reference, is a proposed method for stabilizing the laser frequency. We consider the problem of arm-locking using classical optimal control theory and find that our designs satisfy the LISA requirements.

  3. Pure sources and efficient detectors for optical quantum information processing

    NASA Astrophysics Data System (ADS)

    Zielnicki, Kevin

    Over the last sixty years, classical information theory has revolutionized the understanding of the nature of information, and how it can be quantified and manipulated. Quantum information processing extends these lessons to quantum systems, where the properties of intrinsic uncertainty and entanglement fundamentally defy classical explanation. This growing field has many potential applications, including computing, cryptography, communication, and metrology. As inherently mobile quantum particles, photons are likely to play an important role in any mature large-scale quantum information processing system. However, the available methods for producing and detecting complex multi-photon states place practical limits on the feasibility of sophisticated optical quantum information processing experiments. In a typical quantum information protocol, a source first produces an interesting or useful quantum state (or set of states), perhaps involving superposition or entanglement. Then, some manipulations are performed on this state, perhaps involving quantum logic gates which further manipulate or entangle the initial state. Finally, the state must be detected, obtaining some desired measurement result, e.g., for secure communication or computationally efficient factoring. The work presented here concerns the first and last stages of this process as they relate to photons: sources and detectors. Our work on sources is based on the need for optimized non-classical states of light delivered at high rates, particularly of single photons in a pure quantum state. We seek to better understand the properties of spontaneous parametric downconversion (SPDC) sources of photon pairs, and in doing so, produce such an optimized source. We report an SPDC source which produces pure heralded single photons with little or no spectral filtering, allowing a significant rate enhancement. Our work on detectors is based on the need to reliably measure single-photon states. We have focused on optimizing the detection efficiency of visible light photon counters (VLPCs), a single-photon detection technology that is also capable of resolving photon number states. We report a record-breaking quantum efficiency of 91 +/- 3% observed with our detection system. Both sources and detectors are independently interesting physical systems worthy of study, but together they promise to enable entire new classes and applications of information based on quantum mechanics.

  4. Comparative analysis of Pareto surfaces in multi-criteria IMRT planning

    NASA Astrophysics Data System (ADS)

    Teichert, K.; Süss, P.; Serna, J. I.; Monz, M.; Küfer, K. H.; Thieke, C.

    2011-06-01

    In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context, optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head-and-neck cases. In all cases, no configuration dominates its competitor over the whole Pareto set. For example, in one of the head-and-neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than with the classical method of comparing single treatment plans.
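
    The underlying set operations can be sketched as follows, assuming each Pareto set is given as a finite array of objective vectors (minimisation in every objective); plotting the per-point distances reveals where one beam configuration dominates the other:

    import numpy as np

    def non_dominated(points):
        """Return the mask of points not dominated by any other point
        (minimisation in every objective)."""
        p = np.asarray(points)
        mask = np.ones(len(p), dtype=bool)
        for i in range(len(p)):
            dominates_i = np.all(p <= p[i], axis=1) & np.any(p < p[i], axis=1)
            if dominates_i.any():
                mask[i] = False
        return mask

    def set_distance(a, b):
        """For each point of Pareto set A, its distance to the closest point
        of set B in objective space."""
        a, b = np.asarray(a), np.asarray(b)
        return np.array([np.min(np.linalg.norm(b - pt, axis=1)) for pt in a])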

  5. Clean Quantum and Classical Communication Protocols.

    PubMed

    Buhrman, Harry; Christandl, Matthias; Perry, Christopher; Zuiddam, Jeroen

    2016-12-02

    By how much must the communication complexity of a function increase if we demand that the parties not only correctly compute the function but also return all registers (other than the one containing the answer) to their initial states at the end of the communication protocol? Protocols that achieve this are referred to as clean and the associated cost as the clean communication complexity. Here we present clean protocols for calculating the inner product of two n-bit strings, showing that (in the absence of preshared entanglement) at most n+3 qubits or n+O(√n) bits of communication are required. The quantum protocol provides inspiration for obtaining the optimal method to implement distributed CNOT gates in parallel while minimizing the amount of quantum communication. For more general functions, we show that nearly all Boolean functions require close to 2n bits of classical communication to compute and close to n qubits if the parties have access to preshared entanglement. Both of these values are maximal for their respective paradigms.

  6. The Quantum Socket: Wiring for Superconducting Qubits - Part 1

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Bejanin, J. H.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Mariantoni, M.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.

    Quantum systems with ten superconducting quantum bits (qubits) have been realized, making it possible to demonstrate basic quantum error correction (QEC) algorithms. However, a truly scalable architecture has not been developed yet. QEC requires a two-dimensional array of qubits, restricting any interconnection to external classical systems to the third axis. In this talk, we introduce an interconnect solution for solid-state qubits: The quantum socket. The quantum socket employs three-dimensional wires and makes it possible to connect classical electronics with quantum circuits more densely and accurately than methods based on wire bonding. The three-dimensional wires are based on spring-loaded pins engineered to ensure compatibility with quantum computing applications. Extensive design work and machining was required, with focus on material quality to prevent magnetic impurities. Microwave simulations were undertaken to optimize the design, focusing on the interface between the micro-connector and an on-chip coplanar waveguide pad. Simulations revealed good performance from DC to 10 GHz and were later confirmed against experimental measurements.

  7. Exhaustive Classification of the Invariant Solutions for a Specific Nonlinear Model Describing Near Planar and Marginally Long-Wave Unstable Interfaces for Phase Transition

    NASA Astrophysics Data System (ADS)

    Ahangari, Fatemeh

    2018-05-01

    Problems of thermodynamic phase transition originate inherently in solidification, combustion and various other significant fields. If the transition region between two locally stable phases is sufficiently narrow, the dynamics can be modeled by an interface motion. This paper is devoted to an exhaustive analysis of the invariant solutions of a modified Kuramoto-Sivashinsky equation in two spatial and one temporal dimensions. This nonlinear partial differential equation asymptotically characterizes near-planar interfaces that are marginally long-wave unstable. For this purpose, by applying the classical symmetry method to this model, the classical symmetry operators are obtained. Moreover, the structure of the Lie algebra of symmetries is discussed, and the optimal system of subalgebras, which yields the preliminary classification of group invariant solutions, is constructed. In particular, the Lie invariants corresponding to the infinitesimal symmetry generators, as well as the associated similarity reduced equations, are pointed out. Furthermore, the nonclassical symmetries of this nonlinear PDE are also comprehensively investigated.

  8. Solving an inverse eigenvalue problem with triple constraints on eigenvalues, singular values, and diagonal elements

    NASA Astrophysics Data System (ADS)

    Wu, Sheng-Jhih; Chu, Moody T.

    2017-08-01

    An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.

  9. Loop optimization for tensor network renormalization

    NASA Astrophysics Data System (ADS)

    Yang, Shuo; Gu, Zheng-Cheng; Wen, Xiao-Gang

    We introduce a tensor renormalization group scheme for coarse-graining a two-dimensional tensor network, which can be successfully applied to both classical and quantum systems on and off criticality. The key idea of our scheme is to deform a 2D tensor network into small loops and then optimize tensors on each loop. In this way we remove short-range entanglement at each iteration step, and significantly improve the accuracy and stability of the renormalization flow. We demonstrate our algorithm in the classical Ising model and a frustrated 2D quantum model.

  10. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it has provided a broad stage for the application of artificial intelligence theories and methods. Because the design of a sustainable city is essentially a constrained optimization problem, swarm intelligence algorithms, which have been extensively researched, are natural candidates for solving it. The TLBO (Teaching-Learning-Based Optimization) algorithm is a new swarm intelligence algorithm. Its inspiration comes from the “teaching” and “learning” behavior of a classroom: the evolution of the population is realized by simulating the “teaching” of the teacher and the students “learning” from each other. The algorithm has few parameters, is efficient, conceptually simple, and easy to implement. It has been successfully applied to scheduling, planning, configuration and other fields, where it has achieved good results and attracted increasing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement the random generation algorithm and an evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that our proposed algorithm has obvious advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
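
    A compact sketch of the classical TLBO loop (teacher phase plus learner phase) on a generic minimisation problem; the local-search extension of TLBO_LS and the urban-planning evaluation model are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(5)

    def tlbo(objective, dim, pop=30, iters=200, lo=-10.0, hi=10.0):
        """Teaching-Learning-Based Optimization: the teacher phase pulls the
        class toward the best solution; the learner phase lets pairs of
        solutions learn from each other. No algorithm-specific parameters
        beyond population size and iteration count."""
        X = rng.uniform(lo, hi, (pop, dim))
        F = np.array([objective(x) for x in X])
        for _ in range(iters):
            teacher = X[F.argmin()]
            Tf = rng.integers(1, 3)                    # teaching factor 1 or 2
            for i in range(pop):                       # teacher phase
                new = X[i] + rng.random(dim) * (teacher - Tf * X.mean(axis=0))
                new = np.clip(new, lo, hi)
                fn = objective(new)
                if fn < F[i]:
                    X[i], F[i] = new, fn
            for i in range(pop):                       # learner phase
                j = rng.integers(pop)
                if j == i:
                    continue
                d = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
                new = np.clip(X[i] + rng.random(dim) * d, lo, hi)
                fn = objective(new)
                if fn < F[i]:
                    X[i], F[i] = new, fn
        return X[F.argmin()], F.min()

    best, val = tlbo(lambda x: float(np.sum(x**2)), dim=5)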

  11. Constructing and assessing brain templates from Chinese pediatric MRI data using SPM

    NASA Astrophysics Data System (ADS)

    Yin, Qingjie; Ye, Qing; Yao, Li; Chen, Kewei; Jin, Zhen; Liu, Gang; Wu, Xingchun; Wang, Tingting

    2005-04-01

    Spatial normalization is a very important step in the processing of magnetic resonance imaging (MRI) data, so the quality of brain templates is crucial for the accuracy of MRI analysis. In this paper, using the classical protocol and the optimized protocol plus nonlinear deformation, we constructed T1 whole-brain templates and a priori brain tissue data from 69 Chinese pediatric MRI datasets (ages 7-16 years). We then proposed a new assessment method to evaluate our templates. Ten pediatric subjects were chosen for the assessment, which proceeded as follows. First, the cerebellum region, the region of interest (ROI), was located on both the pediatric volume and the template volume by an experienced neuroanatomist. Second, the pediatric whole brain was mapped to the template with affine and nonlinear deformation. Third, the parameters derived from the second step were used to normalize only the ROI of the child to the ROI of the template. Last, the overlapping ratio, which describes the overlap between the ROI of the template and the normalized ROI of the child, was calculated. The mean overlapping ratio for normalization to the classical template was 0.9687, and the mean for the optimized template was 0.9713. The results show that the two Chinese pediatric brain templates are comparable and their accuracy is adequate for our studies.

  12. The Optimal Location of GEODSS Sensors in Canada

    DTIC Science & Technology

    1991-02-01

    Interactive procedures for solving multiobjective transportation problems are considered. A transportation problem is a classical linear programming problem in which a product must be transported from each of m sources to any of n destinations such that one or more objectives are optimized (36:96). The function z'(x) can then be optimized using any efficient, single-objective transportation…

  13. A spectral hybridizable discontinuous Galerkin method for elastic-acoustic wave propagation

    NASA Astrophysics Data System (ADS)

    Terrana, S.; Vilotte, J. P.; Guillot, L.

    2018-04-01

    We introduce a time-domain, high-order in space, hybridizable discontinuous Galerkin (DG) spectral element method (HDG-SEM) for wave equations in coupled elastic-acoustic media. The method is based on a first-order hyperbolic velocity-strain formulation of the wave equations written in conservative form. This method follows the HDG approach by introducing a hybrid unknown, which is the approximation of the velocity on the element boundaries, as the only globally (i.e. interelement) coupled degrees of freedom. In this paper, we first present a hybridized formulation of the exact Riemann solver at the element boundaries, taking into account elastic-elastic, acoustic-acoustic and elastic-acoustic interfaces. We then use this Riemann solver to derive an explicit construction of the HDG stabilization function τ for all the above-mentioned interfaces. We thus obtain an HDG scheme for coupled elastic-acoustic problems. This scheme is then discretized in space on quadrangular/hexahedral meshes using arbitrary high-order polynomial bases for both volumetric and hybrid fields, using an approach similar to the spectral element methods. This leads to a semi-discrete system of algebraic differential equations (ADEs), which, thanks to the structure of the global conservativity condition, can be reformulated easily as a classical system of first-order ordinary differential equations in time, allowing the use of classical explicit or implicit time integration schemes. When an explicit time scheme is used, the HDG method can be seen as a reformulation of a DG with upwind fluxes. The introduction of the velocity hybrid unknown leads to relatively simple computations at the element boundaries which, in turn, makes the HDG approach competitive with the DG-upwind methods. Extensive numerical results are provided to illustrate and assess the accuracy and convergence properties of this HDG-SEM. The approximate velocity is shown to converge with the optimal order of k + 1 in the L2-norm, when element polynomials of order k are used, and to exhibit the classical spectral convergence of SEM. Additional inexpensive local post-processing in both the elastic and the acoustic case allows higher convergence orders to be achieved. The HDG scheme provides a natural framework for coupling classical, continuous Galerkin SEM with HDG-SEM in the same simulation, as is shown numerically in this paper. As such, the proposed HDG-SEM can combine the efficiency of the continuous SEM with the flexibility of the HDG approaches. Finally, more complex numerical results, inspired by real geophysical applications, are presented to illustrate the capabilities of the method for wave propagation in heterogeneous elastic-acoustic media with complex geometries.

  14. Classical Wigner method with an effective quantum force: application to reaction rates.

    PubMed

    Poulsen, Jens Aage; Li, Huaqing; Nyman, Gunnar

    2009-07-14

    We construct an effective "quantum force" to be used in the classical molecular dynamics part of the classical Wigner method when determining correlation functions. The quantum force is obtained by estimating the most important short time separation of the Feynman paths that enter into the expression for the correlation function. The evaluation of the force is then as easy as classical potential energy evaluations. The ideas are tested on three reaction rate problems. The resulting transmission coefficients are in much better agreement with accurate results than transmission coefficients from the ordinary classical Wigner method.

  15. Load Frequency Control of AC Microgrid Interconnected Thermal Power System

    NASA Astrophysics Data System (ADS)

    Lal, Deepak Kumar; Barisal, Ajit Kumar

    2017-08-01

    In this paper, a microgrid (MG) power generation system is interconnected with a single-area reheat thermal power system for a load frequency control study. A new meta-heuristic optimization algorithm, the Moth-Flame Optimization (MFO) algorithm, is applied to evaluate the optimal gains of fuzzy-based proportional, integral and derivative (PID) controllers. The system dynamic performance is studied by comparing the results with MFO-optimized classical PI/PID controllers. The system performance is also investigated with a fuzzy PID controller optimized by the recently developed grey wolf optimizer (GWO) algorithm, which has proven its superiority over other previously developed algorithms in many interconnected power systems.

  16. Nonadiabatic Molecular Dynamics and Orthogonality Constrained Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Shushkov, Philip Georgiev

    The exact quantum dynamics of realistic, multidimensional systems remains a formidable computational challenge. In many chemical processes, however, quantum effects such as tunneling, zero-point energy quantization, and nonadiabatic transitions play an important role. Therefore, approximate approaches that improve on the classical mechanical framework are of special practical interest. We propose a novel ring polymer surface hopping method for the calculation of chemical rate constants. The method blends two approaches, namely ring polymer molecular dynamics that accounts for tunneling and zero-point energy quantization, and surface hopping that incorporates nonadiabatic transitions. We test the method against exact quantum mechanical calculations for a one-dimensional, two-state model system. The method reproduces quite accurately the tunneling contribution to the rate and the distribution of reactants between the electronic states for this model system. Semiclassical instanton theory, an approach related to ring polymer molecular dynamics, accounts for tunneling by the use of periodic classical trajectories on the inverted potential energy surface. We study a model of electron transfer in solution, a chemical process where nonadiabatic events are prominent. By representing the tunneling electron with a ring polymer, we derive Marcus theory of electron transfer from semiclassical instanton theory after a careful analysis of the tunneling mode. We demonstrate that semiclassical instanton theory can recover the limit of Fermi's Golden Rule rate in a low-temperature, deep-tunneling regime. Mixed quantum-classical dynamics treats a few important degrees of freedom quantum mechanically, while classical mechanics describes affordably the rest of the system. But the interface of quantum and classical description is a challenging theoretical problem, especially for low-energy chemical processes. We therefore focus on the semiclassical limit of the coupled nuclear-electronic dynamics. We show that the time-dependent Schrodinger equation for the electrons employed in the widely used fewest switches surface hopping method is applicable only in the limit of nearly identical classical trajectories on the different potential energy surfaces. We propose a short-time decoupling algorithm that restricts the use of the Schrodinger equation only to the interaction regions. We test the short-time approximation on three model systems against exact quantum-mechanical calculations. The approximation improves the performance of the surface hopping approach. Nonadiabatic molecular dynamics simulations require the efficient and accurate computation of ground and excited state potential energy surfaces. Unlike the ground state calculations where standard methods exist, the computation of excited state properties is a challenging task. We employ time-independent density functional theory, in which the excited state energy is represented as a functional of the total density. We suggest an adiabatic-like approximation that simplifies the excited state exchange-correlation functional. We also derive a set of minimal conditions to impose exactly the orthogonality of the excited state Kohn-Sham determinant to the ground state determinant. This leads to an efficient, variational algorithm for the self-consistent optimization of the excited state energy. Finally, we assess the quality of the excitation energies obtained by the new method on a set of 28 organic molecules. 
The new approach provides results of similar accuracy to time-dependent density functional theory.

  17. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use.
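
    The described strategy can be sketched as below, under simplifying assumptions: a crude moving-average smoother and a 2-sigma outlier rule stand in for the paper's iterative optimization of the baseline vector:

    import numpy as np

    def baseline_correct(signal, n_iter=50):
        """Local-minimum baseline sketch: take local minima as an initial
        baseline, iteratively pull down points that sit well above the
        smoothed baseline (likely peak contributions), then linearly
        interpolate the optimized baseline back onto the full chromatogram."""
        idx = np.where((signal[1:-1] <= signal[:-2]) &
                       (signal[1:-1] <= signal[2:]))[0] + 1     # local minima
        x, b = idx.astype(float), signal[idx].astype(float)
        for _ in range(n_iter):
            smooth = np.convolve(b, np.ones(5) / 5, mode="same")
            resid = b - smooth
            outliers = resid > 2.0 * resid.std()                # peak-like points
            if not outliers.any():
                break
            b[outliers] = smooth[outliers]                      # pave them down
        drift = np.interp(np.arange(len(signal)), x, b)
        return signal - drift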

  18. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    PubMed

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): the extreme learning machine (ELM), the radial basis function network (RBFN) and the genetic-algorithm-optimized back propagation network (GABP). The MLA-based method is validated using test data for different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (the K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM-based algorithm shows overall the best agreement with the experimental data of the three MLAs, owing to its global optimization and extrapolation ability.

  19. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  20. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  1. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a certain optimization goal, such as the shortest duration, the smallest cost or resource balance, the start and finish of all tasks must be arranged so as to satisfy the project's timing and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of the RCPSP, such as job shop scheduling and flow shop scheduling. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, enabling it to be solved more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm. Generally, the parameters are chosen empirically, which cannot ensure that they are optimal. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we perform a sampling analysis, establish a proxy model, and ultimately solve for the optimal parameters.
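
    A minimal sketch of the tuning idea: sample GA parameter pairs, fit a quadratic response surface as the proxy model, and take its minimiser. The parameter ranges and the run_ga callback (assumed to run the GA once and return the achieved project duration) are illustrative assumptions, not the paper's setup:

    import numpy as np

    rng = np.random.default_rng(6)

    def tune_ga_params(run_ga, n_samples=40):
        """Sample (crossover rate, mutation rate) pairs, fit a quadratic
        surrogate of makespan vs parameters, and return the surrogate's
        minimiser over a dense grid."""
        pc = rng.uniform(0.5, 1.0, n_samples)
        pm = rng.uniform(0.01, 0.3, n_samples)
        y = np.array([run_ga(a, b) for a, b in zip(pc, pm)])
        # quadratic response surface: y ~ 1, pc, pm, pc^2, pm^2, pc*pm
        A = np.column_stack([np.ones(n_samples), pc, pm,
                             pc**2, pm**2, pc * pm])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        g1, g2 = np.meshgrid(np.linspace(0.5, 1.0, 50),
                             np.linspace(0.01, 0.3, 50))
        G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                             g1.ravel()**2, g2.ravel()**2, (g1 * g2).ravel()])
        k = (G @ coef).argmin()
        return g1.ravel()[k], g2.ravel()[k]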

  2. Response surface methodology: A non-conventional statistical tool to maximize the throughput of Streptomyces species biomass and their bioactive metabolites.

    PubMed

    Latha, Selvanathan; Sivaranjani, Govindhan; Dhanasekaran, Dharumadurai

    2017-09-01

    Among the diverse actinobacteria, Streptomyces is a renowned ongoing source for the production of a large number of secondary metabolites, furnishing a wealth of pharmacological and biological activities. Hence, to meet the demand for new lead compounds for human and animal use, research constantly targets the bioprospecting of Streptomyces. Optimization of media components and physicochemical parameters is a plausible approach to the intensified production of novel as well as existing bioactive metabolites from various microbes, and is usually achieved by a range of classical techniques, including one factor at a time (OFAT). However, the major drawbacks of conventional optimization methods have motivated the use of statistical optimization approaches in fermentation process development. Response surface methodology (RSM) is one of the empirical techniques extensively used for the modeling, optimization and analysis of fermentation processes. To date, several researchers have implemented RSM in different bioprocess optimizations for the production of assorted natural substances from Streptomyces, with very promising results. This review summarizes some of the recent RSM-based studies on the enhanced production of antibiotics, enzymes and probiotics using Streptomyces, with the intention of highlighting the significance of Streptomyces as well as RSM to the research community and industry.

  3. Optimal simultaneous superpositioning of multiple structures with missing data.

    PubMed

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.

  4. Optimization of the preparation conditions of ceramic products using drinking water treatment sludges.

    PubMed

    Zamora, R M Ramirez; Ayala, F Espesel; Garcia, L Chavez; Moreno, A Duran; Schouwenaars, R

    2008-11-01

    The aim of this work is to optimize, via Response Surface Methodology, the values of the main process parameters for the production of ceramic products using sludges obtained from drinking water treatment, in order to valorise them. In the first experimental stage, sludges were collected from a drinking water treatment plant for characterization. In the second stage, trials were carried out to produce thin cross-section specimens and fired bricks following an orthogonal central composite design of experiments with three factors (sludge composition, grain size and firing temperature) and five levels. The optimization parameters (Y1 = firing shrinkage (%), Y2 = water absorption (%), Y3 = density (g/cm³) and Y4 = compressive strength (kg/cm²)) were determined according to standardized analytical methods. Two distinct physicochemical processes were active during firing at different conditions in the experimental design, preventing the determination of a full response surface that would allow direct optimization of production parameters. Nevertheless, the temperature range for the production of classical red brick was closely delimited by the results; above this temperature, a lightweight ceramic with surprisingly high strength was produced, opening possibilities for the valorisation of a product with considerably higher added value than originally envisioned.

  5. The lowest-order weak Galerkin finite element method for the Darcy equation on quadrilateral and hybrid meshes

    NASA Astrophysics Data System (ADS)

    Liu, Jiangguo; Tavener, Simon; Wang, Zhuoran

    2018-04-01

    This paper investigates the lowest-order weak Galerkin finite element method for solving the Darcy equation on quadrilateral and hybrid meshes consisting of quadrilaterals and triangles. In this approach, the pressure is approximated by constants in element interiors and on edges. The discrete weak gradients of these constant basis functions are specified in local Raviart-Thomas spaces, specifically RT0 for triangles and unmapped RT[0] for quadrilaterals. These discrete weak gradients are used to approximate the classical gradient when solving the Darcy equation. The method produces continuous normal fluxes and is locally mass-conservative, regardless of mesh quality, and has optimal order convergence in pressure, velocity, and normal flux, when the quadrilaterals are asymptotically parallelograms. Implementation is straightforward and results in symmetric positive-definite discrete linear systems. We present numerical experiments and comparisons with other existing methods.

  6. Quantum Optimal Multiple Assignment Scheme for Realizing General Access Structure of Secret Sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    The multiple assignment scheme assigns one or more shares to a single participant so that any kind of access structure can be realized by classical secret sharing schemes. We propose its quantum version, including ramp secret sharing schemes, and then propose an integer optimization approach to minimize the average share size.

  7. Improving Evaluation Use in Local School Settings. Optimizing Evaluation Use: Final Report.

    ERIC Educational Resources Information Center

    King, Jean A.; And Others

    A project for studying ways to optimize utilization of evaluation products in public schools is reported. The results indicate that the negative picture of use prevalent in recent literature stems from the unrealistic expectation that local decision-makers will behave in a classically rational manner. Such a view ignores the political settings of…

  8. Optimal Consumption When Consumption Takes Time

    ERIC Educational Resources Information Center

    Miller, Norman C.

    2009-01-01

    A classic article by Gary Becker (1965) showed that when it takes time to consume, the first order conditions for optimal consumption require the marginal rate of substitution between any two goods to equal their relative full costs. These include the direct money price and the money value of the time needed to consume each good. This important…

  9. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
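
    A minimal sketch of the distance-majorization step for the special case f(x) = ½‖x − a‖²: each squared distance to a constraint set is majorized by the squared distance to the current projection, giving a closed-form quadratic update. The fixed penalty constant rho is an illustrative simplification of the classical increasing penalty schedule:

    import numpy as np

    def proj_ball(x, c, r):
        d = np.linalg.norm(x - c)
        return x if d <= r else c + r * (x - c) / d

    def proj_halfspace(x, a, b):                     # {z : a.z <= b}
        v = a @ x - b
        return x if v <= 0 else x - v * a / (a @ a)

    def distance_majorize(a_point, projections, rho=10.0, iters=200):
        """MM iteration for (approximately) projecting a_point onto an
        intersection of convex sets: majorize each dist(x, C_i)^2 by
        ||x - P_i(x_k)||^2 and minimize the quadratic surrogate exactly."""
        x = a_point.copy()
        m = len(projections)
        for _ in range(iters):
            anchors = [P(x) for P in projections]    # easy projections
            x = (a_point + rho * np.sum(anchors, axis=0)) / (1 + rho * m)
        return x

    # example: project (2, 2) onto (unit ball) ∩ (halfspace x + y <= 0.5)
    sets = [lambda z: proj_ball(z, np.zeros(2), 1.0),
            lambda z: proj_halfspace(z, np.array([1.0, 1.0]), 0.5)]
    print(distance_majorize(np.array([2.0, 2.0]), sets))   # ~ (0.25, 0.25)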

  10. Blade design and analysis using a modified Euler solver

    NASA Technical Reports Server (NTRS)

    Leonard, O.; Vandenbraembussche, R. A.

    1991-01-01

    An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method shows rapid convergence and indicates how sensitive the flow is to small modifications of the blade geometry, a sensitivity that the classical iterative use of analysis methods might not be able to capture. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for the off-design calculations. Examples illustrate the difficulty of designing blade shapes with optimal performance outside the design point as well.

  11. A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods

    DTIC Science & Technology

    2014-08-01

    Approved for public release; distribution is unlimited. This experiment tests whether a virtual… (A thesis by Benjamin Peters comparing the pedagogical effectiveness of virtual worlds and of traditional training methods.)

  12. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE PAGES

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    2018-01-09

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  13. Improving Efficiency in Multi-Strange Baryon Reconstruction in d-Au at STAR

    NASA Astrophysics Data System (ADS)

    Leight, William

    2003-10-01

    We report preliminary multi-strange baryon measurements for d-Au collisions recorded at RHIC by the STAR experiment. After using classical topological analysis, in which cuts for each discriminating variable are adjusted by hand, we investigate improvements in signal-to-noise optimization using Linear Discriminant Analysis (LDA). LDA is an algorithm for finding, in the n-dimensional space of the n discriminating variables, the axis on which the signal and noise distributions are most separated. LDA is the first step in moving towards more sophisticated techniques for signal-to-noise optimization, such as Artificial Neural Nets. Due to the relatively low background and sufficiently high yields of d-Au collisions, they form an ideal system to study these possibilities for improving reconstruction methods. Such improvements will be extremely important for forthcoming Au-Au runs in which the size of the combinatoric background is a major problem in reconstruction efforts.
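
    For readers unfamiliar with the technique, the Python sketch below computes the two-class Fisher discriminant axis w = Sw^-1 (mu_sig - mu_bkg) on made-up Gaussian toy data standing in for the topological discriminating variables; candidates are then cut on the single projected score rather than on each variable separately. This is a generic LDA sketch, not the STAR analysis code.

        import numpy as np

        def lda_axis(signal, background):
            # Fisher axis: direction maximizing the separation of the projected
            # class means relative to the pooled within-class scatter
            mu_s, mu_b = signal.mean(axis=0), background.mean(axis=0)
            Sw = np.cov(signal, rowvar=False) + np.cov(background, rowvar=False)
            w = np.linalg.solve(Sw, mu_s - mu_b)
            return w / np.linalg.norm(w)

        rng = np.random.default_rng(0)
        sig = rng.normal([1.0, 0.5, 0.2], 0.5, size=(1000, 3))   # toy signal
        bkg = rng.normal([0.0, 0.0, 0.0], 0.5, size=(5000, 3))   # toy background
        w = lda_axis(sig, bkg)
        scores_sig, scores_bkg = sig @ w, bkg @ w   # cut on this one score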

  14. Optimizing the diagnostic testing of Clostridium difficile infection.

    PubMed

    Bouza, Emilio; Alcalá, Luis; Reigadas, Elena

    2016-09-01

    Clostridium difficile infection (CDI) is the leading cause of hospital-acquired diarrhea and is associated with a considerable health and cost burden. However, there is still no clear consensus on the best laboratory diagnosis approach, and a wide variation of testing methods and strategies can be encountered. We aim to review the most practical aspects of CDI diagnosis, providing our own view on how to optimize it. Expert commentary: Laboratory diagnosis in search of C. difficile toxins should be applied to all fecal diarrheic samples reaching the microbiology laboratory from patients > 2 years old, with or without classic risk factors for CDI. Detection of toxins, either directly in the fecal sample or in the bacteria isolated in culture, confirms CDI in the proper clinical setting. Nucleic acid amplification techniques (NAAT) make it possible to speed up the process, with epidemiological and therapeutic consequences.

  15. Exploiting Quantum Resonance to Solve Combinatorial Problems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  16. Higher-Order Extended Lagrangian Born-Oppenheimer Molecular Dynamics for Classical Polarizable Models.

    PubMed

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N

    2018-02-13

    Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  17. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
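
    As a minimal sketch of the Kalman-filter family of bias-removal techniques referred to here (and only a sketch: the paper studies more elaborate, nonlinear variants), the Python snippet below tracks the systematic forecast bias as a scalar random-walk state and subtracts the current estimate from each new forecast. The noise variances q and r are illustrative assumptions.

        import numpy as np

        def kalman_bias_filter(forecasts, observations, q=0.01, r=1.0):
            # scalar Kalman filter: state = systematic forecast bias,
            # modeled as a random walk; measurement = forecast error
            bias, p = 0.0, 1.0                 # state estimate and variance
            corrected = []
            for f, y in zip(forecasts, observations):
                corrected.append(f - bias)     # correct with current estimate
                p += q                         # predict (random-walk state)
                k = p / (p + r)                # Kalman gain
                bias += k * ((f - y) - bias)   # update with observed error
                p *= (1.0 - k)
            return np.array(corrected)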

  18. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
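
    For context, the linear stability estimate mentioned in the last sentence is the CFL-type bound dt <= CFL * dx / max|u|; below is a minimal Python sketch of one MacCormack predictor-corrector step for inviscid Burgers' equation using that time step (periodic boundaries assumed for brevity; this is an illustration, not the paper's code).

        import numpy as np

        def maccormack_burgers_step(u, dx, cfl=0.9):
            # one MacCormack step for u_t + (u^2/2)_x = 0, with dt chosen
            # from the linear stability bound dt <= cfl * dx / max|u|
            dt = cfl * dx / np.max(np.abs(u))
            f = 0.5 * u**2
            up = u - dt / dx * (np.roll(f, -1) - f)    # predictor (forward)
            fp = 0.5 * up**2
            u_new = 0.5 * (u + up - dt / dx * (fp - np.roll(fp, 1)))  # corrector
            return u_new, dt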

  19. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filip, Radim; Marek, Petr; Fiurasek, Jaromir

    We analyze the reversibility of optimal Gaussian 1→2 quantum cloning of a coherent state using only local operations on the clones and classical communication between them, and propose a feasible experimental test of this feature. Performing a Bell-type homodyne measurement on one clone and the anticlone, an arbitrary unknown input state (not only a coherent state) can be restored in the other clone by applying an appropriate local unitary displacement operation. We generalize this concept to a partial reversal of the cloning using only local operations and classical communication (LOCC), and we show that this procedure converts the symmetric cloner to an asymmetric cloner. Further, we discuss a distributed LOCC reversal in optimal 1→M Gaussian cloning of coherent states which transforms it to optimal 1→M′ cloning for M′ …

  1. Treatment regimens of classical and newer taxanes.

    PubMed

    Joerger, Markus

    2016-02-01

    The classical taxanes (paclitaxel, docetaxel), the newer taxane cabazitaxel and the nanoparticle-bound nab-paclitaxel are among the most widely used anticancer drugs. The taxanes share the characteristics of extensive hepatic metabolism and biliary excretion, the need for dose adaptation in patients with liver dysfunction, and a substantial pharmacokinetic variability even after taking into account known covariates. Data from clinical studies suggest that optimal scheduling of the taxanes is dependent not only on the specific taxane compound, but also on the tumor type and line of treatment. Still, the optimal dosing regimen (weekly vs 3 weekly) and optimal dose of the taxanes are controversial, as is the value of pharmacological personalization of taxane dosing. In this article, an overview is given on the pharmacological properties of the taxanes, including metabolism, pharmacokinetics-pharmacodynamics and aspects in the clinical use of taxanes. The latter includes the ongoing debate on the most active and safe regimen, the recommended initial dose and the issue of therapeutic drug dosing.

  2. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
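
    A highly simplified sketch of the single-drug case, in Python: the expected profit for a two-arm Phase III trial is the Bayesian assurance (prior-averaged Neyman-Pearson power) times the market value, minus enrollment costs, maximized over the per-arm sample size n. All numbers (market value, costs, prior, significance level) are hypothetical placeholders, and the risk-adjusted and portfolio extensions of the paper are not reproduced.

        import numpy as np
        from scipy import stats

        V = 500e6          # market value if the trial succeeds (assumed)
        c_per_pat = 50e3   # cost per enrolled patient (assumed)
        sigma = 1.0        # known outcome standard deviation
        alpha = 0.025      # one-sided regulatory significance level
        rng = np.random.default_rng(1)
        delta_prior = rng.normal(0.3, 0.2, size=20000)  # company's prior draws

        def expected_profit(n):
            # classical test from the regulator's view, assurance from the
            # company's Bayesian prior over the true effect delta
            se = sigma * np.sqrt(2.0 / n)               # two arms, n per arm
            z_crit = stats.norm.ppf(1 - alpha)
            power = stats.norm.sf(z_crit - delta_prior / se)
            return power.mean() * V - 2 * n * c_per_pat

        ns = np.arange(50, 2000, 25)
        n_opt = ns[int(np.argmax([expected_profit(n) for n in ns]))]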

  3. Challenges and opportunities in the manufacture and expansion of cells for therapy.

    PubMed

    Maartens, Joachim H; De-Juan-Pardo, Elena; Wunner, Felix M; Simula, Antonio; Voelcker, Nicolas H; Barry, Simon C; Hutmacher, Dietmar W

    2017-10-01

    Laboratory-based ex vivo cell culture methods are largely manual in their manufacturing processes. This makes it extremely difficult to meet regulatory requirements for process validation, quality control and reproducibility. Cell culture concepts with a translational focus need to embrace a more automated approach where cell yields are able to meet the quantitative production demands, the correct cell lineage and phenotype is readily confirmed and reagent usage has been optimized. Areas covered: This article discusses the obstacles inherent in classical laboratory-based methods and their concomitant impact on cost-of-goods, and argues that a technology step change is required to facilitate translation from bench to bedside. Expert opinion: While traditional bioreactors have demonstrated limited success where adherent cells are used in combination with microcarriers, further process optimization will be required to find solutions for commercial-scale therapies. New cell culture technologies based on 3D-printed cell culture lattices with favourable surface-to-volume ratios have the potential to change the paradigm in industry. An integrated Quality-by-Design/Systems engineering approach will be essential to facilitate the scaled-up translation from proof-of-principle to clinical validation.

  4. The Applications of Genetic Algorithms in Medicine.

    PubMed

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-11-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, these algorithms are not well known to physicians, who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career.
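
    For orientation, the following minimal real-coded genetic algorithm in Python shows the three ingredients every GA application shares: selection, crossover, and mutation. The toy fitness function is a placeholder; a medical application would substitute, for example, a classifier's validation score or a treatment-plan quality measure.

        import numpy as np

        def genetic_algorithm(fitness, n_genes, pop_size=60, n_gen=200,
                              p_mut=0.1, rng=np.random.default_rng(0)):
            # minimal GA: tournament selection, uniform crossover,
            # Gaussian mutation; `fitness` is maximized
            pop = rng.uniform(-1, 1, size=(pop_size, n_genes))
            for _ in range(n_gen):
                fit = np.array([fitness(ind) for ind in pop])
                new_pop = []
                for _ in range(pop_size):
                    i, j = rng.integers(pop_size, size=2)    # tournament 1
                    a = pop[i] if fit[i] > fit[j] else pop[j]
                    i, j = rng.integers(pop_size, size=2)    # tournament 2
                    b = pop[i] if fit[i] > fit[j] else pop[j]
                    mask = rng.random(n_genes) < 0.5         # uniform crossover
                    child = np.where(mask, a, b)
                    mutate = rng.random(n_genes) < p_mut     # Gaussian mutation
                    child = child + mutate * rng.normal(0, 0.1, n_genes)
                    new_pop.append(child)
                pop = np.array(new_pop)
            fit = np.array([fitness(ind) for ind in pop])
            return pop[int(np.argmax(fit))]

        # toy use: maximize a simple concave function of five "genes"
        best = genetic_algorithm(lambda w: -np.sum((w - 0.5) ** 2), n_genes=5)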

  5. The Applications of Genetic Algorithms in Medicine

    PubMed Central

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-01-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, these algorithms are not well known to physicians, who may well benefit by applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  6. Design of off-statistics axial-flow fans by means of vortex law optimization

    NASA Astrophysics Data System (ADS)

    Lazari, Andrea; Cattanei, Andrea

    2014-12-01

    Off-statistics input data sets are common in axial-flow fan design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified, and suitable ranges of variation are proposed for them. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, some correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. Such new correlations help size a fan of given characteristics as well as study the feasibility of a given design.

  7. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator is limited by line of sight and prone to errors caused by carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities so that it can navigate via a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions, using a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to tests on the flying site. The results showed that the range-finder sensor provides real-time data to the algorithm, which finds a collision-free path and eventually optimizes the route successfully.
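
    As a sketch of the greedy re-planning ingredient, the Python snippet below implements nearest-neighbour ordering of waypoints (the global route in the paper comes from a Simple Genetic Algorithm, which is not reproduced here; the waypoint coordinates are invented).

        import numpy as np

        def nearest_neighbour_route(waypoints, start=0):
            # greedy ordering: always fly to the closest unvisited waypoint
            pts = np.asarray(waypoints, dtype=float)
            unvisited = set(range(len(pts)))
            route = [start]
            unvisited.discard(start)
            while unvisited:
                last = pts[route[-1]]
                nxt = min(unvisited,
                          key=lambda i: np.linalg.norm(pts[i] - last))
                route.append(nxt)
                unvisited.discard(nxt)
            return route

        # hypothetical survey waypoints (x, y) in metres
        wps = [(0, 0), (40, 10), (15, 30), (55, 35), (25, 5)]
        print(nearest_neighbour_route(wps))   # [0, 4, 1, 3, 2]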

  8. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  9. Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.

    PubMed

    Khoo, Y; Singer, A; Cowburn, D

    2017-07-01

    We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, the NOE distance restraints are often too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and the axes of the principal axis frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we propose two algorithms, RDC-SOS and RDC-NOE-SOS, that have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge, this is the first time an SOS relaxation has been introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information-theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS-based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.

  10. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate this problem, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators are used in place of the classical sample mean and sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real-data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with an existing robust LDR.

  11. Hybrid Quantum Mechanics/Molecular Mechanics Solvation Scheme for Computing Free Energies of Reactions at Metal-Water Interfaces.

    PubMed

    Faheem, Muhammad; Heyden, Andreas

    2014-08-12

    We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of the QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed using a number of interpolated states, and the free energy difference between adjacent states is calculated using the QM/MM-FEP method. By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in double-dehydrogenated ethylene glycol on a Pt (111) model surface.

  12. A finite-element toolbox for the stationary Gross-Pitaevskii equation with rotation

    NASA Astrophysics Data System (ADS)

    Vergez, Guillaume; Danaila, Ionut; Auliac, Sylvain; Hecht, Frédéric

    2016-12-01

    We present a new numerical system using classical finite elements with mesh adaptivity for computing stationary solutions of the Gross-Pitaevskii equation. The programs are written as a toolbox for FreeFem++ (www.freefem.org), a free finite-element software package available for all existing operating systems. This offers the advantage of hiding all technical issues related to the implementation of the finite element method, making it easy to code various numerical algorithms. Two robust and optimized numerical methods were implemented to minimize the Gross-Pitaevskii energy: a steepest descent method based on Sobolev gradients and a minimization algorithm based on the state-of-the-art optimization library Ipopt. For both methods, mesh adaptivity strategies are used to reduce the computational time and increase the local spatial accuracy when vortices are present. Different run cases are made available for 2D and 3D configurations of Bose-Einstein condensates in rotation. An optional graphical user interface is also provided, making it easy to run predefined cases or cases with user-defined parameter files. We also provide several post-processing tools (such as the identification of quantized vortices) that can help in extracting physical features from the simulations. The toolbox is extremely versatile and can be easily adapted to deal with different physical models.
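
    As a conceptual (not FreeFem++) illustration of the energy-minimization task the toolbox solves, here is a 1D normalized-gradient-flow sketch in Python for a non-rotating condensate in a harmonic trap: descend the Gross-Pitaevskii energy and renormalize the wave function after each step. Grid size, coupling g, and step size are arbitrary choices, and the rotation term, finite elements, and mesh adaptivity of the toolbox are all omitted.

        import numpy as np

        n, L, g = 256, 16.0, 50.0
        x = np.linspace(-L / 2, L / 2, n)
        dx = x[1] - x[0]
        V = 0.5 * x**2                         # harmonic trap
        psi = np.exp(-x**2)                    # initial guess
        psi /= np.sqrt(np.sum(psi**2) * dx)    # unit L2 norm
        dt = 1e-4
        for _ in range(50000):
            lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
            psi = psi + dt * (0.5 * lap - V * psi - g * psi**3)  # descent step
            psi /= np.sqrt(np.sum(psi**2) * dx)  # project back to unit norm
        # psi now approximates the ground state of the 1D GPE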

  13. A simplified version of the total Kjeldahl nitrogen method using an ammonia extraction ultrasound-assisted purge-and-trap system and ion chromatography for analyses of geological samples.

    PubMed

    Pontes, Fernanda V M; Carneiro, Manuel C; Vaitsman, Delmo S; da Rocha, Genilda P; da Silva, Lílian I D; Neto, Arnaldo A; Monteiro, Maria Inês C

    2009-01-26

    The total Kjeldahl nitrogen (TKN) method was simplified by using a manifold connected to a purge-and-trap system immersed in an ultrasonic (US) bath for simultaneous ammonia (NH3) extraction from many previously digested samples. The ammonia was then collected in an acidic solution, converted to ammonium (NH4+), and finally determined by ion chromatography. Several variables were optimized, such as the ultrasonic irradiation power and frequency, the ultrasound-assisted NH3 extraction time, and the NH4+ mass and sulfuric acid concentration added to the NH3 collector flask. Recovery tests revealed no changes in pH values and no conversion of NH4+ into other nitrogen species during the irradiation of NH4Cl solutions with 25 or 40 kHz ultrasonic waves for up to 20 min. Sediment and oil-free sandstone samples and soil certified reference materials (NCS DC 73319, NCS DC 73321 and NCS DC 73326) with different total nitrogen concentrations were analysed. The proposed method is faster, simpler and more sensitive than the classical Kjeldahl steam distillation method. The time for NH3 extraction by the US-assisted purge-and-trap system (20 min) was half that of the Kjeldahl steam distillation (40 min) for 10 previously digested samples. The detection limit was 9 µg g⁻¹ N, versus 58 µg g⁻¹ N for the classical Kjeldahl/indophenol method. Precision was always better than 13%. Unlike the indophenol method, the proposed method uses no carcinogenic reagents. Furthermore, it can be adapted for fixed-NH4+ determination.

  14. Nested-PCR real time as alternative molecular tool for detection of Borrelia burgdorferi compared to the classical serological diagnosis of the blood.

    PubMed

    Sroka-Oleksiak, Agnieszka; Ufir, Krzysztof; Salamon, Dominika; Bulanda, Malgorzata; Gosiewski, Tomasz

    Lyme disease (LD), caused by Borrelia burgdorferi, is a multisystem disease that is often difficult to recognize because of the genetic heterogeneity of the pathogen. Currently, the gold standard for the detection of LD is serologic diagnostics based mainly on the ELISA and Western blot (WB) tests. These methods, however, suffer from considerable shortcomings, especially in the initial phase of infection, due to the so-called serological window period and low specificity. For this reason, they might be replaced by molecular methods, such as the polymerase chain reaction (PCR), which should be more sensitive and specific. In the present study, we attempted to optimize the PCR reaction conditions and enhance the sensitivity of the existing test by applying a nested version of real-time PCR for the detection of B. burgdorferi DNA in patients' blood. The study involved 94 blood samples from patients with suspected LD. From each sample, 1.5 ml of blood was used for the isolation of bacterial DNA and real-time PCR amplification, as well as its nested equivalent; the remaining part was earmarked for serological testing. Optimization of the reaction conditions was performed experimentally, using temperature gradients and magnesium ion concentration gradients for the real-time PCR and nested real-time PCR reactions. The results show that nested real-time PCR has a much higher sensitivity for the detection of B. burgdorferi, with 45 (47.8%) positive results, than the single-round variant without preceding pre-amplification, with 2 (2.1%). Serological methods allowed the detection of infection in 41 (43.6%) samples. These results support nested PCR as a better molecular tool for the detection of B. burgdorferi infection than classical real-time PCR. The nested real-time PCR method may be considered a complement to ELISA and WB, mainly in the early stages of infection, when B. burgdorferi cells are circulating in the blood. By contrast, serological and molecular test results should always be interpreted taking into account the patient's clinical status.

  15. SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis.

    PubMed

    Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan

    2017-09-01

    With the rapid development of big data, multi-subject functional magnetic resonance imaging (fMRI) data analysis is becoming more and more important. As a kind of blind source separation technique, group independent component analysis (GICA) has been widely applied to multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA due to its disadvantages. In this paper, in order to overcome these issues, and considering that the ability of GICA for fMRI data analysis can be improved by adding a priori information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of the a priori information extracted from the group subjects; a multi-objective optimization strategy is then used to implement this method. Finally, the post-processing means of principal component analysis and anti-reconstruction are used to obtain the group spatial component and the individual temporal component in the group, respectively. The experimental results show that the proposed SCGICAR method achieves a better performance on both single-subject and multi-subject fMRI data analysis compared with classical methods. It not only can detect more accurate spatial and temporal components for each subject of the group, but it can also obtain a better group component in both the temporal and spatial domains. These results demonstrate that the proposed SCGICAR method has its own advantages in comparison with classical methods, and that it can better reflect the commonness of subjects in the group.

  16. Automation of Classical QEEG Trending Methods for Early Detection of Delayed Cerebral Ischemia: More Work to Do.

    PubMed

    Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon

    2016-06-01

    The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
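
    As a sketch of the core quantity being trended, the Python snippet below computes the alpha-delta power ratio of a single EEG channel segment with a Welch periodogram (SciPy); clinical implementations average over many channels and long epochs and compare against a patient-specific baseline. The sampling rate and band edges are common choices, not values taken from the paper.

        import numpy as np
        from scipy.signal import welch

        def alpha_delta_ratio(eeg, fs=256.0):
            # alpha (8-13 Hz) to delta (1-4 Hz) power ratio of one segment;
            # a sustained relative decline of this ratio is the classical
            # trending signature for delayed cerebral ischemia
            f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))
            alpha = pxx[(f >= 8) & (f <= 13)].sum()
            delta = pxx[(f >= 1) & (f <= 4)].sum()
            return alpha / delta

        # e.g. flag epochs whose ratio drops well below a patient baseline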

  17. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method, and a combination of classic extraction with saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but its precision, repeatability and reproducibility were not acceptable. Therefore, the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. The classic method, however, would be the method of choice for the determination of the different lipid classes.

  18. Structural optimization of dental restorations using the principle of adaptive growth.

    PubMed

    Couegnat, Guillaume; Fok, Siu L; Cooper, Jonathan E; Qualtrough, Alison J E

    2006-01-01

    In a restored tooth, the stresses that occur at the tooth-restoration interface during loading could become large enough to fracture the tooth and/or restoration and it has been estimated that 92% of fractured teeth have been previously restored. The tooth preparation process for a dental restoration is a classical optimization problem: tooth reduction must be minimized to preserve tooth tissue whilst stress levels must be kept low to avoid fracture of the restored unit. The objective of the present study was to derive alternative optimized designs for a second upper premolar cavity preparation by means of structural shape optimization based on the finite element method and biological adaptive growth. Three models of cavity preparations were investigated: an inlay design for preparation of a premolar tooth, an undercut cavity design and an onlay preparation. Three restorative materials and several tooth/restoration contact conditions were utilized to replicate the in vitro situation as closely as possible. The optimization process was run for each cavity geometry. Mathematical shape optimization based on biological adaptive growth process was successfully applied to tooth preparations for dental restorations. Significant reduction in stress levels at the tooth-restoration interface where bonding is imperfect was achieved using optimized cavity or restoration shapes. In the best case, the maximum stress value was reduced by more than 50%. Shape optimization techniques can provide an efficient and effective means of reducing the stresses in restored teeth and hence has the potential of prolonging their service lives. The technique can easily be adopted for optimizing other dental restorations.

  19. Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin

    NASA Astrophysics Data System (ADS)

    Otiefy, R. A. H.; Negm, H. M.

    2010-12-01

    The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three main different control strategies, a linear quadratic Gaussian (LQG) which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE), an optimal static output feedback (SOF), and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and norm of Kalman filter estimator gains (NKFEG) respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
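
    For orientation, the LQR ingredient of the LQG strategy can be sketched in a few lines of Python using SciPy's Riccati solver; the two-state system below is a made-up, lightly damped oscillator standing in for an aeroelastic mode, not the paper's wing-box model.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(A, B, Q, R):
            # continuous-time LQR: u = -K x minimizes the integral of
            # x'Qx + u'Ru; K = R^-1 B' P with P the Riccati solution
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        A = np.array([[0.0, 1.0], [-4.0, 0.05]])   # toy aeroelastic mode
        B = np.array([[0.0], [1.0]])               # piezoelectric actuator input
        K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))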

  20. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.

  1. Hunters' acceptability of the surveillance system and alternative surveillance strategies for classical swine fever in wild boar - a participatory approach.

    PubMed

    Schulz, Katja; Calba, Clémentine; Peyre, Marisa; Staubach, Christoph; Conraths, Franz J

    2016-09-06

    Surveillance measures can only be effective if key players in the system accept them. Acceptability, which describes the willingness of persons to contribute, is often analyzed using participatory methods. Participatory epidemiology enables the active involvement of key players in the assessment of epidemiological issues. In the present study, we used a participatory method recently developed by CIRAD (Centre de Coopération Internationale en Recherche Agronomique pour le Développement) to evaluate the functionality and acceptability of Classical Swine Fever (CSF) surveillance in wild boar in Germany, which is highly dependent on the participation of hunters. The acceptability of alternative surveillance strategies was also analyzed. By conducting focus group discussions, potential vulnerabilities in the system were detected and feasible alternative surveillance strategies identified. Trust in the current surveillance system is high, whereas the acceptability of the operation of the system is medium. Analysis of the acceptability of alternative surveillance strategies showed how risk-based surveillance approaches can be combined to develop strategies that have sufficient support and functionality. Furthermore, some surveillance strategies were clearly rejected by the hunters. Thus, the implementation of such strategies may be difficult. Participatory methods can be used to evaluate the functionality and acceptability of existing surveillance plans for CSF among hunters and to optimize plans regarding their chances of successful implementation.

  2. A quantum–quantum Metropolis algorithm

    PubMed Central

    Yung, Man-Hong; Aspuru-Guzik, Alán

    2012-01-01

    The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
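
    For reference, the classical algorithm being generalized fits in a dozen lines of Python: propose a local move, and accept it with probability min(1, exp(-beta * dE)) so that the chain samples the thermal (Boltzmann) distribution. The double-well energy used here is an arbitrary example.

        import numpy as np

        def metropolis(energy, x0, beta, n_steps, step=0.5,
                       rng=np.random.default_rng(0)):
            # classical Metropolis sampling of exp(-beta * E(x))
            x, e = x0, energy(x0)
            samples = []
            for _ in range(n_steps):
                x_new = x + rng.normal(0, step)       # local proposal
                e_new = energy(x_new)
                # accept with probability min(1, exp(-beta * dE))
                if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
                    x, e = x_new, e_new
                samples.append(x)
            return np.array(samples)

        # thermal samples from a double-well potential
        samples = metropolis(lambda x: (x**2 - 1.0) ** 2,
                             x0=0.0, beta=3.0, n_steps=50000)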

  3. A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics

    ERIC Educational Resources Information Center

    Pujol, O.; Perez, J. P.

    2007-01-01

    The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
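
    A minimal quantum-mechanical instance of the method, as a Python sketch: for a piecewise-constant 1D potential, one 2x2 matrix per interface relates plane-wave amplitudes in adjacent regions, and multiplying them yields the transmission probability. The effective-mass constant and the single-barrier example are illustrative assumptions; the same matrix bookkeeping carries over to the optical and electronic settings the paper surveys.

        import numpy as np

        HBAR2_2M = 3.81   # hbar^2/(2 m_e) in eV*Angstrom^2 (electron, approx.)

        def transmission(E, layers):
            # layers: list of (width in Angstrom, potential in eV) between two
            # zero-potential leads; complex k handles evanescent regions
            Vs = [0.0] + [V for _, V in layers] + [0.0]
            ks = [np.sqrt(complex(E - V) / HBAR2_2M) for V in Vs]
            xs = np.concatenate([[0.0], np.cumsum([w for w, _ in layers])])

            def P(k, x):  # maps amplitudes (A, B) to (psi, psi') at x
                e = np.exp(1j * k * x)
                return np.array([[e, 1 / e], [1j * k * e, -1j * k / e]])

            M = np.eye(2, dtype=complex)
            for j, x in enumerate(xs):      # interface between regions j, j+1
                M = np.linalg.solve(P(ks[j + 1], x), P(ks[j], x)) @ M
            r = -M[1, 0] / M[1, 1]          # no wave incident from the right
            t = M[0, 0] + M[0, 1] * r       # transmitted amplitude
            return float((abs(t) ** 2 * (ks[-1] / ks[0])).real)

        print(transmission(E=0.5, layers=[(5.0, 1.0)]))  # tunneling regime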

  4. Perturbational and nonperturbational inversion of Rayleigh-wave velocities

    USGS Publications Warehouse

    Haney, Matt; Tsai, Victor C.

    2017-01-01

    The inversion of Rayleigh-wave dispersion curves is a classic geophysical inverse problem. We have developed a set of MATLAB codes that performs forward modeling and inversion of Rayleigh-wave phase or group velocity measurements. We describe two different methods of inversion: a perturbational method based on finite elements and a nonperturbational method based on the recently developed Dix-type relation for Rayleigh waves. In practice, the nonperturbational method can be used to provide a good starting model that can be iteratively improved with the perturbational method. Although the perturbational method is well-known, we solve the forward problem using an eigenvalue/eigenvector solver instead of the conventional approach of root finding. Features of the codes include the ability to handle any mix of phase or group velocity measurements, combinations of modes of any order, the presence of a surface water layer, computation of partial derivatives due to changes in material properties and layer boundaries, and the implementation of an automatic grid of layers that is optimally suited for the depth sensitivity of Rayleigh waves.

  5. The Soda Can Optimization Problem: Getting Close to the Real Thing

    ERIC Educational Resources Information Center

    Premadasa, Kirthi; Martin, Paul; Sprecher, Bryce; Yang, Lai; Dodge, Noah-Helen

    2016-01-01

    Optimizing the dimensions of a soda can is a classic problem that is frequently posed to freshman calculus students. However, if we only minimize the surface area subject to a fixed volume, the result is a can with a square edge-on profile, and this differs significantly from actual cans. By considering a more realistic model for the can that…
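
    The classic calculation alluded to runs as follows (in LaTeX) and explains the square edge-on profile: minimizing surface area at fixed volume forces the height to equal the diameter.

        \begin{aligned}
        A(r) &= 2\pi r^{2} + 2\pi r h, \qquad V = \pi r^{2} h \ \text{fixed}
              \;\Longrightarrow\; h = \frac{V}{\pi r^{2}},\\
        A(r) &= 2\pi r^{2} + \frac{2V}{r}, \qquad
              A'(r) = 4\pi r - \frac{2V}{r^{2}} = 0
              \;\Longrightarrow\; r^{3} = \frac{V}{2\pi},\\
        h &= \frac{V}{\pi r^{2}} = \frac{2\pi r^{3}}{\pi r^{2}} = 2r.
        \end{aligned}

    Since h = 2r gives a square cross-section while real cans are visibly taller than they are wide, a more realistic model has to account for additional features of actual cans, which is the direction the article takes.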

  6. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  7. Topometry optimization of sheet metal structures for crashworthiness design using hybrid cellular automata

    NASA Astrophysics Data System (ADS)

    Mozumder, Chandan K.

    The objective in crashworthiness design is to generate plastically deformable energy-absorbing structures which can satisfy a prescribed force-displacement (FD) response. The FD behavior determines the reaction force, displacement and internal energy that the structure should withstand. However, attempts to include this requirement in structural optimization problems remain scarce. The existing commercial optimization tools utilize models under static loading conditions because of the complexities associated with dynamic/impact loading. Due to the complexity of a crash event and the consequent time required to numerically analyze the dynamic response of the structure, classical methods (i.e., gradient-based and direct) are not well suited to this undertaking. This work presents an approach under the framework of the hybrid cellular automaton (HCA) method to address the above challenge. The HCA method has been successfully applied to nonlinear transient topology optimization for crashworthiness design. In this work, the HCA algorithm has been utilized to develop an efficient methodology for synthesizing shell-based sheet metal structures with an optimal material thickness distribution under a dynamic loading event using topometry optimization. The methodology utilizes the cellular automata (CA) computing paradigm and nonlinear transient finite element analysis (FEA) via ls-dyna. In this method, a set of field variables is driven to their target states by changing a convenient set of design variables (e.g., thickness). The rules operate locally in cells within a lattice, each of which knows only local conditions. The field variables associated with the cells are driven to a setpoint to obtain the desired structure, as sketched below. This methodology is used to design structures with controlled energy absorption and specified buckling zones. The peak reaction force and the maximum displacement are also constrained to meet the desired safety level according to passenger safety regulations. Design for a prescribed FD response, by minimizing the error between the actual response and the desired FD curve, is implemented. With the use of HCA rules, manufacturability constraints (e.g., rolling) and structures which can be manufactured by special techniques, such as tailor-welded blanks (TWB), have also been implemented. This methodology is applied to shock-absorbing structural components for passengers in a crashing vehicle. The results are compared to previous designs, showing the benefits of the method introduced in this work.
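
    As a conceptual sketch of the CA ingredient (Python; the field values would come from ls-dyna runs in the actual method, and the gain and thickness bounds here are invented), each cell nudges its design variable in proportion to the deviation of the neighborhood-averaged field variable, e.g. internal energy density, from the setpoint.

        import numpy as np

        def hca_thickness_update(t, field, setpoint, k=0.05,
                                 t_min=0.5, t_max=3.0):
            # one HCA-style local rule on a 2D lattice of shell elements:
            # average the field over the von Neumann neighborhood, then move
            # each cell's thickness toward the setpoint condition
            padded = np.pad(field, 1, mode='edge')   # zero-flux borders
            nbr = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
                   + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
            t_new = t + k * (nbr - setpoint)         # local proportional rule
            return np.clip(t_new, t_min, t_max)      # manufacturable bounds

        # outer loop: run transient FEA, extract energy density, update, repeat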

  8. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  9. Shell-vial culture, coupled with real-time PCR, applied to Rickettsia conorii and Rickettsia massiliae-Bar29 detection, improving the diagnosis of the Mediterranean spotted fever.

    PubMed

    Segura, Ferran; Pons, Immaculada; Sanfeliu, Isabel; Nogueras, María-Mercedes

    2016-04-01

    Rickettsia conorii and Rickettsia massiliae-Bar29 are related to Mediterranean spotted fever (MSF). They are intracellular microorganisms. The shell-vial culture assay (SV) improved Rickettsia culture, but it still has some limitations: blood usually contains low amounts of the microorganisms, and the samples that contain the highest amounts of them are non-sterile. The objectives of this study were to optimize SV culture conditions and monitoring methods and to establish antibiotic concentrations useful for non-sterile samples. Twelve SVs were inoculated with each microorganism, incubated at different temperatures and monitored by classical methods and real-time PCR. R. conorii was detected by all methods at all temperatures from the 7th day of incubation. R. massiliae-Bar29 was first observed at 28°C; real-time PCR detected it 2-7 days earlier (depending on temperature) than classical methods. The antibiotic concentrations needed for the isolation of these Rickettsia species from non-sterile samples were determined by inoculating SVs with R. conorii, R. massiliae-Bar29, biopsy or tick material, incubating them with different dilutions of antibiotics and monitoring them weekly. To sum up, if an MSF diagnosis is suspected, SVs should be incubated at both 28°C and 32°C for 1-3 weeks and monitored by a sensitive real-time PCR; if the sample is non-sterile, the panel of antibiotics tested here can be added. Copyright © 2016 Elsevier GmbH. All rights reserved.

  10. Non linear predictive control of a LEGO mobile robot

    NASA Astrophysics Data System (ADS)

    Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.

    2014-10-01

    Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply the metaheuristic particle swarm optimization (PSO) to the solution of the optimization problem arising in nonlinear model predictive control (NLMPC). This algorithm is easy to code and may be considered an alternative to the more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
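
    For concreteness, a minimal inertia-weight PSO of the kind applied to the NLMPC subproblem is sketched below; the swarm size and the acceleration coefficients are common defaults, and the quadratic cost merely stands in for the receding-horizon tracking cost:

      import numpy as np

      def pso_minimize(cost, dim, n_particles=30, iters=200,
                       w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
          """Minimal inertia-weight PSO (a sketch, not the authors' code)."""
          lo, hi = bounds
          x = np.random.uniform(lo, hi, (n_particles, dim))  # positions
          v = np.zeros_like(x)                               # velocities
          pbest = x.copy()                                   # personal bests
          pbest_cost = np.array([cost(p) for p in x])
          g = pbest[pbest_cost.argmin()].copy()              # global best
          for _ in range(iters):
              r1, r2 = np.random.rand(2, n_particles, dim)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              c = np.array([cost(p) for p in x])
              better = c < pbest_cost
              pbest[better], pbest_cost[better] = x[better], c[better]
              g = pbest[pbest_cost.argmin()].copy()
          return g, pbest_cost.min()

      # Stand-in for the receding-horizon tracking cost over 4 control moves.
      u_opt, cost_opt = pso_minimize(lambda u: float(np.sum((u - 1.0) ** 2)), dim=4)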

  11. Linear Quantum Systems: Non-Classical States and Robust Stability

    DTIC Science & Technology

    2016-06-29

    has a history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control ... analysis and control of quantum linear systems and their interactions with non-classical quantum fields by developing control theoretic concepts exploiting ...

  12. [An assessment of the functional status in the neurorehabilitation of patients after ischemic stroke].

    PubMed

    Klimkiewicz, Paulina; Klimkiewicz, Robert; Jankowska, Agnieszka; Kubsik, Anna; Widłak, Patrycja; Łukasiak, Adam; Janczewska, Katarzyna; Kociuga, Natalia; Nowakowski, Tomasz; Woldańska-Okońska, Marta

    2018-01-01

    Introduction: In this article, the authors focus on the symptoms of ischemic stroke and the effect of neurorehabilitation methods on the functional status of patients after ischemic stroke. The aim of the study was to evaluate and compare the functional status of patients after ischemic stroke rehabilitated with classical kinesiotherapy alone, with classical kinesiotherapy and NDT-Bobath, and with classical kinesiotherapy and PNF. Materials and methods: The study involved 120 patients after ischemic stroke, treated in the Department of Rehabilitation and Physical Medicine of the Medical University of Lodz (USK). Patients were divided into 3 groups of 40. Group 1 was rehabilitated with classical kinesiotherapy alone, group 2 with classical kinesiotherapy and NDT-Bobath, and group 3 with classical kinesiotherapy and PNF. In all groups, magnetostimulation was additionally performed using the Viofor JPS System. Assessments were conducted twice: before treatment and immediately after the 5-week therapy. The effects of the applied neurorehabilitation methods were assessed on the basis of the Rivermead Motor Assessment (RMA). Results: Functional improvement was achieved in all three groups; however, a significantly greater improvement was observed in group 2, treated with classical kinesiotherapy and NDT-Bobath. Conclusions: Classical kinesiotherapy combined with the NDT-Bobath method is noticeably more effective in improving the functional status of patients after ischemic stroke than classical kinesiotherapy alone or classical kinesiotherapy combined with PNF.

  13. Curvature sensor for ocular wavefront measurement.

    PubMed

    Díaz-Doutón, Fernando; Pujol, Jaume; Arjona, Montserrat; Luque, Sergio O

    2006-08-01

    We describe a new wavefront sensor for ocular aberration determination, based on the curvature sensing principle, which adapts the classical system used in astronomy to measurements of the living eye. The actual experimental setup is presented; its design parameters were adjusted for optimal performance through a process guided by computer simulations. We present results for artificial and real young eyes, compared with Hartmann-Shack estimations; both methods show similar performance for these cases. This system will allow the measurement of higher-order aberrations than currently used wavefront sensors in situations where such aberrations are expected to be significant, such as post-surgery eyes.

  14. Fundamentals and techniques of nonimaging optics for solar energy concentration

    NASA Astrophysics Data System (ADS)

    Winston, R.; Gallagher, J. J.

    1980-05-01

    The properties of a variety of new and previously known nonimaging optical configurations were investigated. A thermodynamic model which quantitatively explains the enhancement of the effective absorptance of gray-body receivers through cavity effects was developed. The classic method of Liu and Jordan, which allows one to predict diffuse sunlight levels through correlation with the total and direct fractions, was revised, updated and applied to predict the performance of nonimaging solar collectors. The conceptual design of an optimized solar collector which integrates the techniques of nonimaging concentration with evacuated-tube collector technology was carried out and is presently the basis for a separately funded hardware development project.

  15. Rapid recipe formulation for plasma etching of new materials

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali; Zhang, Zizhuo; Ekerdt, John; Bonnecaze, Roger T.

    2016-03-01

    A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and shown to be superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
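
    A minimal sketch of the ingredients named in the abstract, under assumed forms: a Langmuir-type steady-state rate law standing in for the volume-averaged plasma model, and a single-component Metropolis-Hastings sampler fitting its kinetic constants to limited, noisy etch-rate data (parameter names, priors and noise level are illustrative):

      import numpy as np

      def etch_rate(theta, flux):
          """Toy Langmuir-type steady-state rate law standing in for the
          volume-averaged plasma model: rate = k*K*flux / (1 + K*flux)."""
          k, K = theta
          return k * K * flux / (1.0 + K * flux)

      def single_component_mh(flux, rate_obs, theta0, steps=5000,
                              prop=0.05, sigma=0.2):
          """Single-component Metropolis-Hastings: one parameter is perturbed
          per sub-step, with a positivity prior on the kinetic constants."""
          rng = np.random.default_rng(0)
          theta = np.array(theta0, float)

          def log_post(th):
              if np.any(th <= 0):
                  return -np.inf
              resid = rate_obs - etch_rate(th, flux)
              return -0.5 * np.sum(resid ** 2) / sigma ** 2

          lp, chain = log_post(theta), []
          for _ in range(steps):
              for i in range(theta.size):
                  cand = theta.copy()
                  cand[i] += rng.normal(0.0, prop)
                  lp_c = log_post(cand)
                  if np.log(rng.random()) < lp_c - lp:   # accept/reject
                      theta, lp = cand, lp_c
              chain.append(theta.copy())
          return np.array(chain)

      # Limited synthetic data: 12 noisy etch-rate measurements.
      flux = np.linspace(0.1, 2.0, 12)
      rate = etch_rate((5.0, 1.2), flux) + np.random.default_rng(1).normal(0, 0.2, 12)
      chain = single_component_mh(flux, rate, theta0=(1.0, 1.0))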

  16. The Propulsive-Only Flight Control Problem

    NASA Technical Reports Server (NTRS)

    Blezad, Daniel J.

    1996-01-01

    Attitude control of aircraft using only the throttles is investigated. The long time constants of both the engines and of the aircraft dynamics, together with the coupling between longitudinal and lateral aircraft modes make piloted flight with failed control surfaces hazardous, especially when attempting to land. This research documents the results of in-flight operation using simulated failed flight controls and ground simulations of piloted propulsive-only control to touchdown. Augmentation control laws to assist the pilot are described using both optimal control and classical feedback methods. Piloted simulation using augmentation shows that simple and effective augmented control can be achieved in a wide variety of failed configurations.

  17. New trends and affinity tag designs for recombinant protein purification.

    PubMed

    Wood, David W

    2014-06-01

    Engineered purification tags can facilitate very efficient purification of recombinant proteins, resulting in high yields and purities in a few standard steps. Over the years, many different purification tags have been developed, including short peptides, epitopes, folded protein domains, non-chromatographic tags and more recently, compound multifunctional tags with optimized capabilities. Although classic proteases are still primarily used to remove the tags from target proteins, new self-cleaving methods are gaining traction as a highly convenient alternative. In this review, we discuss some of these emerging trends, and examine their potential impacts and remaining challenges in recombinant protein research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Application of quasi-distributions for solving inverse problems of neutron and γ-ray transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    The considered inverse problems deal with the calculation of unknown values of nuclear installations by means of known (goal) functionals of neutron/γ-ray distributions. Examples of these problems are the calculation of the automatic control rod position as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations and fuel enrichment via the measured functionals. The authors have developed a new method to solve the inverse problem: it finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than the one based on the classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.

  19. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind-tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. An LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing four helium nozzles: two at Mach 15, one at Mach 12, and one at Mach 18. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles under different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least-squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.
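
    The structure of the LS problem can be sketched as follows, with a toy quasi-1D surrogate in place of the PNS solve and knot radii (rather than the paper's spline slopes) as design variables; all names and the target distribution are illustrative:

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import least_squares

      x_knots = np.linspace(0.0, 1.0, 6)        # axial stations of spline knots
      x_eval = np.linspace(0.0, 1.0, 50)        # stations where Mach is matched
      mach_target = 1.0 + 14.0 * x_eval ** 0.7  # desired centerline distribution

      def centerline_mach(r_knots):
          """Toy quasi-1D surrogate for the PNS solve: Mach number grows with
          the area ratio of the spline-defined wall contour."""
          wall = CubicSpline(x_knots, r_knots)
          area_ratio = (wall(x_eval) / wall(0.0)) ** 2
          return 1.0 + 6.0 * np.log(np.maximum(area_ratio, 1.0))

      def residual(r_knots):
          # LS mismatch between computed and target Mach distributions, the
          # same structure as the LS problem coupled to the PNS code.
          return centerline_mach(r_knots) - mach_target

      r0 = np.linspace(1.0, 4.0, x_knots.size)  # initial contour guess
      sol = least_squares(residual, r0, bounds=(0.5, 10.0))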

  20. Portfolio Analysis for Vector Calculus

    ERIC Educational Resources Information Center

    Kaplan, Samuel R.

    2015-01-01

    Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…

  1. What is unrealistic optimism?

    PubMed

    Jefferson, Anneli; Bortolotti, Lisa; Kuzmanovic, Bojana

    2017-04-01

    Here we consider the nature of unrealistic optimism and other related positive illusions. We are interested in whether cognitive states that are unrealistically optimistic are belief states, whether they are false, and whether they are epistemically irrational. We also ask to what extent unrealistically optimistic cognitive states are fixed. Based on the classic and recent empirical literature on unrealistic optimism, we offer some preliminary answers to these questions, thereby laying the foundations for answering further questions about unrealistic optimism, such as whether it has biological, psychological, or epistemic benefits. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Correction of differential renal function for asymmetric renal area ratio in unilateral hydronephrosis.

    PubMed

    Aktaş, Gul Ege; Sarıkaya, Ali

    2015-11-01

    Children with unilateral hydronephrosis are followed up with anteroposterior pelvic diameter (APD), hydronephrosis grade, mercaptoacetyltriglycine (MAG-3) drainage pattern and differential renal function (DRF). Indeterminate drainage with preserved DRF in higher grades of hydronephrosis can, in some situations, complicate the decision-making process. Due to an asymmetric renal area ratio, falsely negative DRF estimations can result in missed optimal surgery times. This study was designed to assess whether correcting the DRF estimation according to kidney area reflects the clinical situation of a hydronephrotic kidney better than the classical DRF calculation, considered together with the hydronephrosis grade, APD and MAG-3 drainage pattern. We reviewed the MAG-3 scans, dimercaptosuccinic acid (DMSA) scans and ultrasonography (US) of 23 children (6 girls, 17 boys, mean age: 29 ± 50 months) with unilateral hydronephrosis. MAG-3 and DMSA scans were performed within 3 months of each other (mean 25.4 ± 30.7 days), and the closest US findings (mean 41.5 ± 28.2 days) were used. DMSA DRF estimations were obtained using the geometric mean method. Secondary calculations were performed to correct the counts (the total counts divided by the number of pixels in the ROI) according to kidney area. The renogram patterns of the patients were evaluated and separated into subgroups. The visual assessment of the DMSA scans was noted, and the hydronephrotic kidney was classified in comparison to the uptake of the normal contralateral kidney. The correlations of the DRF values of the classical and area-corrected methods with the MAG-3 renogram patterns, the visual classification of the DMSA scan, the hydronephrosis grade and the APD were assessed. The DRF estimations of the two methods were statistically different (p = 0.001), and the categories of 12 hydronephrotic kidneys changed. There were no correlations between the classical DRF estimations and the hydronephrosis grade, APD, visual classification of the DMSA scan or uptake evaluation, and the DRF distributions across MAG-3 drainage patterns did not differ. Area-corrected DRF estimations correlated with all of these: with increasing hydronephrosis grade and APD, the DRF estimations decreased and the MAG-3 drainage patterns worsened. A decrease in DRF (< 45 %) was observed when APD was ≥ 10 mm, and a reduction of DRF below 40 % when APD was ≥ 26 mm. Our results suggest that correcting the DRF estimation for the asymmetric renal area ratio in unilateral hydronephrosis can be more robust than the classical method, especially for higher grades of hydronephrotic kidneys under equivocal circumstances.
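
    A sketch of the two DRF calculations being compared, under the usual geometric-mean convention; the area correction divides the counts by the ROI pixel count before normalizing, as the abstract describes (the numbers below are made up):

      import numpy as np

      def drf_geometric_mean(ant_counts, post_counts):
          """Classical DRF: geometric mean of anterior/posterior counts for
          each kidney, normalized so both kidneys sum to 1."""
          gm = np.sqrt(np.asarray(ant_counts, float) *
                       np.asarray(post_counts, float))
          return gm / gm.sum()

      def drf_area_corrected(ant_counts, post_counts, roi_pixels):
          """Area-corrected variant sketched from the abstract: counts are
          divided by the ROI pixel count first, so an enlarged hydronephrotic
          kidney is not credited for sheer size."""
          gm = np.sqrt(np.asarray(ant_counts, float) *
                       np.asarray(post_counts, float))
          density = gm / np.asarray(roi_pixels, float)
          return density / density.sum()

      # Made-up example: [left, right]; the hydronephrotic left kidney has a
      # much larger ROI, so its area-corrected DRF drops.
      print(drf_geometric_mean([52000, 48000], [50000, 46000]))
      print(drf_area_corrected([52000, 48000], [50000, 46000], [2600, 1500]))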

  3. A Classical Based Derivation of Time Dilation Providing First Order Accuracy to Schwarzschild's Solution of Einstein's Field Equations

    NASA Astrophysics Data System (ADS)

    Austin, Rickey W.

    In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is to apply the classical work-energy theorem to relativistic momentum. This approach starts with a classical work-energy theorem and applies SR's momentum to the derivation; one outcome is relativistic kinetic energy, from which it is straightforward to form a kinetic-energy-based time dilation function. In the derivation of General Relativity, a common approach is to bypass classical laws as a starting point; instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classical laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequences of the universal speed of light for all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown that this method gives first-order accuracy compared to Schwarzschild's metric. The SR kinetic energy and the newly derived NGPE are combined to form a Riemannian metric based on these two energies. A geodesic is derived and calculations are compared to Schwarzschild's geodesic for a test mass orbiting a central, non-rotating, non-charged massive body. The new metric results in high-accuracy calculations when compared to the predictions of Einstein's General Relativity. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects, mimicking SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity is derived, providing at minimum first-order accuracy to Schwarzschild's solution of Einstein's field equations.
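
    For reference, the first-order agreement invoked here follows from a textbook expansion (standard results, not the dissertation's full derivation): inserting the Newtonian potential into an energy-based dilation factor reproduces the Schwarzschild static time-dilation factor to first order,

      \[
        \sqrt{1-\frac{2GM}{rc^{2}}}\;\approx\;1-\frac{GM}{rc^{2}}\;=\;1+\frac{\Phi}{c^{2}},
        \qquad \Phi=-\frac{GM}{r},
      \]

    in parallel with the kinetic factor \(\sqrt{1-v^{2}/c^{2}}\approx 1-v^{2}/(2c^{2})\) that comes out of SR's work-energy derivation.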

  4. Assessing learning outcomes in middle-division classical mechanics: The Colorado Classical Mechanics and Math Methods Instrument

    NASA Astrophysics Data System (ADS)

    Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.

    2017-06-01

    Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.

  5. Performance of the air2stream model that relates air and stream water temperatures depends on the calibration method

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.

    2018-06-01

    A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative: it is based on physical heat-budget processes, but is simplified enough that it may be applied like a data-driven model. The price for simplicity, however, is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (the states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method, and two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
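
    The shape of the calibration problem can be sketched as below, with a toy three-parameter relaxation model standing in for air2stream (which has eight parameters and a discharge-dependent term) and scipy's differential evolution standing in for CoBiDE, itself a DE variant:

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(1)
      T_air = 10 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) \
              + rng.normal(0, 1, 365)

      def simulate(params, T_air, T0=8.0):
          """Toy 3-parameter relaxation model standing in for air2stream."""
          a, b, c = params
          T_w, t = np.empty_like(T_air), T0
          for i, Ta in enumerate(T_air):
              t = t + a + b * Ta - c * t        # daily explicit update
              T_w[i] = t
          return T_w

      T_obs = simulate((0.5, 0.12, 0.15), T_air) + rng.normal(0, 0.3, 365)

      def rmse(params):
          return np.sqrt(np.mean((simulate(params, T_air) - T_obs) ** 2))

      # Global calibration; DE stands in for the CoBiDE variant chosen there.
      result = differential_evolution(rmse, [(0, 2), (0, 1), (0.01, 1)],
                                      seed=1, maxiter=200, tol=1e-8)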

  6. Can specific transcriptional regulators assemble a universal cancer signature?

    NASA Astrophysics Data System (ADS)

    Roy, Janine; Isik, Zerrin; Pilarsky, Christian; Schroeder, Michael

    2013-10-01

    Recently, there is a lot of interest in using biomarker signatures derived from gene expression data to predict cancer progression. We assembled signatures from 25 published datasets covering 13 types of cancer. How do these signatures compare with each other? On the one hand, signatures answering the same biological question should overlap, whereas signatures predicting different cancer types should differ; on the other hand, there could also be a Universal Cancer Signature that is predictive independently of the cancer type. Initially, we generate signatures for all datasets using classical approaches such as the t-test and fold change, and then we explore signatures resulting from a network-based method that applies the random surfer model of Google's PageRank algorithm. We show that the signatures as published by the authors and the signatures generated with classical methods do not overlap - not even for the same cancer type - whereas the network-based signatures strongly overlap. Selecting 10 out of 37 universal cancer genes gives the optimal prediction for all cancers, thus taking a first step towards a Universal Cancer Signature. We furthermore analyze and discuss the involved genes in terms of the hallmarks of cancer and in particular single out SP1, JUN/FOS and NFKB1, examining their specific role in cancer progression.
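
    The network-based step can be illustrated with a personalized PageRank on a toy interaction graph; the graph, the expression scores biasing the random surfer, and the signature size are all made-up stand-ins for the paper's data:

      import networkx as nx

      # Toy interaction graph and differential-expression scores; the paper
      # runs a random-surfer (PageRank-like) model on a real network.
      g = nx.Graph()
      g.add_edges_from([("SP1", "JUN"), ("JUN", "FOS"), ("SP1", "NFKB1"),
                        ("NFKB1", "TP53"), ("TP53", "BRCA1")])
      expr_score = {"SP1": 2.0, "JUN": 1.5, "FOS": 1.2, "NFKB1": 1.8,
                    "TP53": 0.4, "BRCA1": 0.3}

      # Expression scores bias the surfer via the personalization vector.
      ranks = nx.pagerank(g, alpha=0.85, personalization=expr_score)
      signature = sorted(ranks, key=ranks.get, reverse=True)[:3]
      print(signature)   # top-ranked candidate signature genes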

  7. Torsional vibrations of shafts of mechanical systems

    NASA Astrophysics Data System (ADS)

    Gulevsky, V. A.; Belyaev, A. N.; Trishina, T. V.

    2018-03-01

    The aim of the research is to compare the calculated dependencies for determining the equivalent rigidity of a mechanical system and to reconcile the methods of compiling dynamic models for systems with elastic reducer couplings in the applied and classical oscillation theories. The analysis revealed that most of the damage in mechanisms and their parts is due to oscillations arising from the dynamic impact of various factors: shock and alternating loads, unbalanced machine parts, etc. Therefore, the designer at the design stage, and the engineer during operation, should provide means of regulating the oscillatory processes in both parts and whole machines by creating rational designs, as well as by using special devices such as vibration dampers and various vibrators with optimal characteristics. A method is proposed for deriving a formula for the equivalent stiffness of a double-mass oscillating system of a multistage reducer with elastic reducer links, neglecting the internal losses and inertia of its elements. The result coincides exactly with that obtained from the classical theory of small mechanical oscillations, and makes separate formulas for reducing the moments of inertia of the flywheel masses and the stiffnesses of the shafts unnecessary.
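
    The textbook relations being reconciled are, for shaft sections in series referred to a common reference shaft through gear ratios \(u_i\) (reference rotation over section rotation), and for the free torsional oscillation of the resulting double-mass system (standard results, not the paper's specific derivation):

      \[
        \frac{1}{k_{\mathrm{eq}}}=\sum_i \frac{u_i^{2}}{k_i},
        \qquad
        \omega_n=\sqrt{k_{\mathrm{eq}}\,\frac{J_1+J_2}{J_1\,J_2}}.
      \]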

  8. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
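
    A compact sketch of the scheme under simple assumptions (a quadratic objective, box and ball constraint sets with cheap projections, and a slowly growing penalty as in the classical penalty method); step sizes and growth rates are illustrative, not the paper's tuned values:

      import numpy as np

      def project_box(x, lo, hi):
          return np.clip(x, lo, hi)

      def project_ball(x, center, radius):
          d = x - center
          n = np.linalg.norm(d)
          return x if n <= radius else center + radius * d / n

      def distance_majorization(grad_f, L, projections, x0, rho=1.0,
                                iters=500, rho_growth=1.02):
          """Minimize f(x) + (rho/2) * sum_i dist(x, C_i)^2: each dist^2 term
          is majorized by anchoring at the current projection, then one
          gradient step is taken on the surrogate; rho grows slowly as in the
          classical penalty method. L is a Lipschitz constant for grad f."""
          x = np.asarray(x0, float)
          for _ in range(iters):
              anchors = [P(x) for P in projections]       # majorization step
              g = grad_f(x) + rho * sum(x - a for a in anchors)
              x = x - g / (L + rho * len(projections))    # surrogate step
              rho *= rho_growth
          return x

      # Example: nearest point to (3, 4) in box [0,1]^2 intersected with the
      # unit ball, i.e. minimize ||x - (3,4)||^2 / 2 over the intersection.
      x = distance_majorization(
          grad_f=lambda x: x - np.array([3.0, 4.0]), L=1.0,
          projections=[lambda z: project_box(z, 0.0, 1.0),
                       lambda z: project_ball(z, np.zeros(2), 1.0)],
          x0=np.zeros(2))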

  9. Hybrid genetic algorithm in the Hopfield network for maximum 2-satisfiability problem

    NASA Astrophysics Data System (ADS)

    Kasihmuddin, Mohd Shareduwan Mohd; Sathasivam, Saratha; Mansor, Mohd. Asyraf

    2017-08-01

    Heuristic methods are designed to find optimal or near-optimal solutions more quickly than classical methods, which can be too complex to be practical. In this study, a hybrid approach that utilizes the Hopfield network and a genetic algorithm for the maximum 2-satisfiability problem (MAX-2SAT) is proposed. The Hopfield neural network is used to minimize logical inconsistency in interpretations of logic clauses or programs. The genetic algorithm (GA) pioneered the implementation of methods that exploit the idea of combination to reproduce better solutions. Simulations with and without the genetic algorithm were examined using Microsoft Visual C++ 2013 Express. The performance of both searching techniques on MAX-2SAT was evaluated based on the global minima ratio, the ratio of satisfied clauses and computation time. The results obtained from the computer simulation demonstrate the effectiveness and acceleration features of the genetic algorithm for MAX-2SAT in the Hopfield network.

  10. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key step of the IDR algorithms is the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization against some fixed subspace. Other independent approaches for the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures that provide various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  11. Optimization of the graph model of the water conduit network, based on the approach of search space reducing

    NASA Astrophysics Data System (ADS)

    Korovin, Iakov S.; Tkachenko, Maxim G.

    2018-03-01

    In this paper we present a heuristic approach improving the efficiency of methods used to create efficient architectures of water distribution networks. The essence of the approach is a procedure of search space reduction by limiting the range of available pipe diameters that can be used for each edge of the network graph. To perform the reduction, two opposite boundary scenarios for the distribution of flows are analyzed, after which the resulting range is further narrowed by applying a flow rate limitation to each edge of the network. The first boundary scenario provides the most uniform distribution of the flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small time cost. This approach was used to modify a genetic algorithm (GA): the proposed GA provides a variable number of variants for each gene, according to the number of diameters in the reduced list, taking flow restrictions into account. The proposed approach was applied to the evaluation of a well-known test network - the Hanoi water distribution network [1] - and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and more obvious convergence in comparison with the classical version of the GA.
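
    The per-edge reduction can be sketched as a simple velocity filter applied under the two boundary flow scenarios; the diameter list, velocity limits and flows below are illustrative, not the paper's data:

      import numpy as np

      DIAMETERS = np.array([0.3048, 0.4064, 0.5080, 0.6096, 0.7620, 1.0160])  # m

      def feasible_diameters(q_uniform, q_peak, v_min=0.3, v_max=3.0):
          """For one edge, keep only the diameters whose flow velocity stays
          within [v_min, v_max] under both boundary scenarios (most uniform
          and highest-flow distributions); q_* are edge flows in m^3/s."""
          keep = []
          for d in DIAMETERS:
              area = np.pi * d ** 2 / 4.0
              v_lo, v_hi = q_uniform / area, q_peak / area
              if v_hi <= v_max and v_lo >= v_min:
                  keep.append(d)
          return keep

      # A GA gene for this edge then indexes into the reduced list only.
      print(feasible_diameters(q_uniform=0.15, q_peak=0.60))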

  12. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes. PMID:23864831

  13. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation as the two second-order equations, then deal with a second-order equation employing finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution for semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in L² and H¹-norm for both the scalar unknown u and the diffusion term w = -Δu and a priori error estimates in (L²)²-norm for its gradient χ = ∇u for both semi-discrete and fully discrete schemes.

  14. Residues with similar hexagon neighborhoods share similar side-chain conformations.

    PubMed

    Li, Shuai Cheng; Bu, Dongbo; Li, Ming

    2012-01-01

    We present in this study a new approach to coding protein side-chain conformations into hexagon substructures. Classical side-chain packing methods consist of two steps: first, side-chain conformations, known as rotamers, are extracted from known protein structures as candidates for each residue; second, a search method along with an energy function is used to resolve conflicts among residues and to optimize the combination of side-chain conformations for all residues. These methods benefit from the fact that the number of possible side-chain conformations is limited and the rotamer candidates are readily extracted; however, they also suffer from the inaccuracy of energy functions. Inspired by threading and ab initio approaches to protein structure prediction, we propose to use hexagon substructures to implicitly capture subtle issues of energy functions. Our initial results indicate that even without guidance from an energy function, hexagon structures alone can capture side-chain conformations at an accuracy of 83.8 percent, higher than the 82.6 percent achieved by state-of-the-art side-chain packing methods.

  15. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    PubMed

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
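
    One plausible form of such a distance is sketched below: intervals are stored as [lower, upper] bounds per feature, and a learned per-feature weight vector scales the squared bound differences (an illustrative definition, not necessarily the paper's exact formula):

      import numpy as np

      def interval_distance(a, b, weights):
          """Weighted squared Euclidean distance for interval data: a, b are
          (n_features, 2) arrays of [lower, upper] bounds; weights is the
          per-feature relevance vector learned by the swarm."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          diff_sq = (a[:, 0] - b[:, 0]) ** 2 + (a[:, 1] - b[:, 1]) ** 2
          return float(np.dot(weights, diff_sq))

      # Example: two 2-feature interval objects and learned feature weights.
      x = [[1.0, 2.0], [10.0, 14.0]]
      p = [[1.5, 2.5], [11.0, 13.0]]
      print(interval_distance(x, p, weights=np.array([0.8, 0.2])))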

  16. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial, with a reverse linear interpolation between the EU limits providing an initial value for the desired telemetry count. The method presented here is not new; what is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This makes the method simple to understand and implement, and there are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
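
    A sketch of the described inversion in Python rather than the paper's FORTRAN: reverse linear interpolation between the EU values at the count limits supplies the initial guess, and Newton-Raphson refines it to sub-count accuracy (the calibration coefficients and count range are made up):

      import numpy as np

      def eu_to_counts(coeffs, eu, count_min=0, count_max=4095, tol=0.5):
          """Invert a counts->EU calibration polynomial: reverse linear
          interpolation between the EU values at the count limits gives the
          initial guess, then Newton-Raphson refines it to sub-count error.
          `coeffs` are polynomial coefficients, highest order first."""
          p = np.poly1d(coeffs)
          dp = p.deriv()
          eu_lo, eu_hi = p(count_min), p(count_max)
          # Initial guess by reverse linear interpolation.
          c = count_min + (eu - eu_lo) * (count_max - count_min) / (eu_hi - eu_lo)
          for _ in range(20):                # converges in a few iterations
              f = p(c) - eu
              if abs(f) < tol * abs(dp(c)):  # within ~tol counts of the root
                  break
              c -= f / dp(c)
          return int(round(min(max(c, count_min), count_max)))

      # Made-up third-order calibration; find the count that yields 42.0 EU.
      cal = [1e-9, -2e-6, 0.05, -3.0]        # counts -> EU
      print(eu_to_counts(cal, 42.0))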

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal; Lewenstein, Maciej; Institut fuer Theoretische Physik, Universitaet Hannover, D-30167 Hannover

    We solve the problem of the optimal cloning of pure entangled two-qubit states with a fixed degree of entanglement using local operations and classical communication. We show that, amazingly, classical communication between the parties can improve the fidelity of local cloning if and only if the initial entanglement is higher than a certain critical value. It is completely useless for weakly entangled states. We also show that bound entangled states with positive partial transpose are not useful as a resource to improve the best local cloning fidelity.

  18. Refined genetic algorithm -- Economic dispatch example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheble, G.B.; Brittig, K.

    1995-02-01

    A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality; thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.

  19. Direct and reverse secret-key capacities of a quantum channel.

    PubMed

    Pirandola, Stefano; García-Patrón, Raul; Braunstein, Samuel L; Lloyd, Seth

    2009-02-06

    We define the direct and reverse secret-key capacities of a memoryless quantum channel as the optimal rates that entanglement-based quantum-key-distribution protocols can reach by using a single forward classical communication (direct reconciliation) or a single feedback classical communication (reverse reconciliation). In particular, the reverse secret-key capacity can be positive for antidegradable channels, where no forward strategy is known to be secure. This property is explicitly shown in the continuous variable framework by considering arbitrary one-mode Gaussian channels.

  20. Thermodynamic efficiency limits of classical and bifacial multi-junction tandem solar cells: An analytical approach

    NASA Astrophysics Data System (ADS)

    Alam, Muhammad Ashraful; Khan, M. Ryyan

    2016-10-01

    Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.

  1. Interferometry with non-classical motional states of a Bose-Einstein condensate.

    PubMed

    van Frank, S; Negretti, A; Berrada, T; Bücker, R; Montangero, S; Schaff, J-F; Schumm, T; Calarco, T; Schmiedmayer, J

    2014-05-30

    The Ramsey interferometer is a prime example of precise control at the quantum level. It is usually implemented using internal states of atoms, molecules or ions, for which powerful manipulation procedures are now available. Whether it is possible to control external degrees of freedom of more complex, interacting many-body systems at this level remained an open question. Here we demonstrate a two-pulse Ramsey-type interferometer for non-classical motional states of a Bose-Einstein condensate in an anharmonic trap. The control sequences used to manipulate the condensate wavefunction are obtained from optimal control theory and are directly optimized to maximize the interferometric contrast. They permit a fast manipulation of the atomic ensemble compared to the intrinsic decay processes and many-body dephasing effects. This allows us to reach an interferometric contrast of 92% in the experimental implementation.

  2. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
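
    The randomized trace estimation step can be illustrated in isolation: a Hutchinson-type estimator needs only matrix-vector products with the (approximate posterior covariance) operator, which in the OED setting are realized by PDE solves; the dense matrix below is just a small stand-in:

      import numpy as np

      def hutchinson_trace(apply_A, n, n_samples=50, seed=0):
          """Randomized (Hutchinson) trace estimator: tr(A) is approximated
          by the mean of z^T A z over random Rademacher vectors z. apply_A
          need only supply matrix-vector products, which in the OED setting
          are realized by PDE solves."""
          rng = np.random.default_rng(seed)
          est = 0.0
          for _ in range(n_samples):
              z = rng.choice([-1.0, 1.0], size=n)
              est += z @ apply_A(z)
          return est / n_samples

      # Small dense SPD matrix standing in for the posterior covariance.
      rng = np.random.default_rng(0)
      M = rng.normal(size=(200, 200))
      A = M @ M.T / 200.0
      print(hutchinson_trace(lambda v: A @ v, 200), np.trace(A))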

  3. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and by the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of the nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  4. Optimal simultaneous superpositioning of multiple structures with missing data

    PubMed Central

    Theobald, Douglas L.; Steindel, Phillip A.

    2012-01-01

    Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369

  5. Improvement of the cruise performances of a wing by means of aerodynamic optimization. Validation with a Far-Field method

    NASA Astrophysics Data System (ADS)

    Jiménez-Varona, J.; Ponsin Roca, J.

    2015-06-01

    Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve wing performance at cruise conditions has been carried out using an in-house design code. The original wing was provided by AI-M and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite-difference approach for gradient computations. Several techniques for search-direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, including perturbation functions for the thickness and/or mean-line distributions and fitting of Bézier curves of a certain degree. It is very important to address a real design involving several constraints that significantly reduce the feasible design space, and an assessment of the code is needed in order to check its capabilities and possible drawbacks. Lessons learnt will help in the development of future enhancements. In addition, the results were validated using the well-known TAU flow solver and a far-field drag method in order to accurately determine the improvement in terms of drag counts.

  6. Simulation of wave packet tunneling of interacting identical particles

    NASA Astrophysics Data System (ADS)

    Lozovik, Yu. E.; Filinov, A. V.; Arkhipov, A. S.

    2003-02-01

    We demonstrate a different method of simulating nonstationary quantum processes, considering the tunneling of two interacting identical particles represented by wave packets. The method of Wigner molecular dynamics (WMD) used here is based on the Wigner representation of quantum mechanics. In this method, ensembles of classical trajectories are used to solve the quantum Wigner-Liouville equation. These classical trajectories obey Hamiltonian-like equations, where the effective potential consists of the usual classical term and a quantum term that depends on the Wigner function and its derivatives. The quantum term is calculated using the local distribution of trajectories in phase space; the classical trajectories are therefore not independent, contrary to classical molecular dynamics. The developed WMD method takes into account the influence of exchange and interaction between particles. The role of direct and exchange interactions in tunneling is analyzed, and the tunneling times for interacting particles are calculated.

  7. Application of Reduced Order Transonic Aerodynamic Influence Coefficient Matrix for Design Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley W.

    2009-01-01

    Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Today's aircraft design in the transonic speed regime is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code, and design approaches in this regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, which are usually performed repeatedly to optimize the final design, the whole design process is considerably slowed down. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order-reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem, and transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, root-locus, etc. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.

  8. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problem, which only have linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient, it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.

  9. Mining chemical reactions using neighborhood behavior and condensed graphs of reactions approaches.

    PubMed

    de Luca, Aurélie; Horvath, Dragos; Marcou, Gilles; Solov'ev, Vitaly; Varnek, Alexandre

    2012-09-24

    This work addresses the problem of similarity search and classification of chemical reactions using Neighborhood Behavior (NB) and Condensed Graphs of Reaction (CGR) approaches. The CGR formalism represents chemical reactions as a classical molecular graph with dynamic bonds, enabling descriptor calculations on this graph. Different types of the ISIDA fragment descriptors generated for CGRs in combination with two metrics--Tanimoto and Euclidean--were considered as chemical spaces, to serve for reaction dissimilarity scoring. The NB method has been used to select an optimal combination of descriptors which distinguish different types of chemical reactions in a database containing 8544 reactions of 9 classes. Relevance of NB analysis has been validated in generic (multiclass) similarity search and in clustering with Self-Organizing Maps (SOM). NB-compliant sets of descriptors were shown to display enhanced mapping propensities, allowing the construction of better Self-Organizing Maps and similarity searches (NB and classical similarity search criteria--AUC ROC--correlate at a level of 0.7). The analysis of the SOM clusters proved chemically meaningful CGR substructures representing specific reaction signatures.

  10. Fireworks algorithm for mean-VaR/CVaR models

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has advantages over the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.

  11. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
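
    In the equal-covariance normal model referred to here, the standardized difference and its classical maximizer can be written (standard notation, added for reference):

      \[
        T(\mathbf{w})=\frac{\mathbf{w}^{\top}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0)}
        {\sqrt{\mathbf{w}^{\top}\Sigma\,\mathbf{w}}},
        \qquad
        \mathbf{w}_{\mathrm{opt}}\propto\Sigma^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_0);
      \]

    the paper's generalization filters the case-group data through a possibly nonlinear function U before this standardized difference is maximized.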

  12. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
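
    As a schematic of the design loop, the sketch below uses simulated annealing to choose a one-dimensional design point that maximizes a closed-form discrepancy (here a Gaussian KL divergence, standing in for the expected cross entropy) between an invented model prediction and an invented experimental response; neither function is the paper's joint-structure model.

      import numpy as np

      def gaussian_kl(mu0, s0, mu1, s1):
          # Closed-form KL divergence between two normal distributions
          return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

      model_mean = lambda x: 2.0 * x                         # hypothetical model
      exper_mean = lambda x: 2.0 * x + 0.5 * np.sin(3 * x)   # hypothetical experiment

      def utility(x):
          return gaussian_kl(model_mean(x), 0.3, exper_mean(x), 0.3)

      rng = np.random.default_rng(2)
      x, temp = 0.5, 1.0
      for _ in range(5000):   # simulated annealing over the design variable
          cand = np.clip(x + rng.normal(0, 0.1), 0.0, 2.0)
          delta = utility(cand) - utility(x)
          if delta > 0 or rng.random() < np.exp(delta / temp):
              x = cand
          temp *= 0.999
      print("selected design point:", round(x, 3), "utility:", round(utility(x), 3))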

  13. [Navigated implantation of total knee endoprostheses--a comparative study with conventional instrumentation].

    PubMed

    Jenny, J Y; Boeri, C

    2001-01-01

    A navigation system should improve the quality of total knee prosthesis implantation in comparison to the classical, surgeon-controlled operative technique. The authors implanted 40 total knee prostheses with an optical infrared navigation system (Orthopilot, AESCULAP, Tuttlingen; group A). The quality of implantation was studied on postoperative long-leg AP and lateral X-rays and compared to a control group of 40 computer-paired total knee prostheses of the same model (Search Prosthesis, AESCULAP, Tuttlingen) implanted with a classical, surgeon-controlled technique (group B). An optimal mechanical femorotibial angle (3 degrees valgus to 3 degrees varus) was obtained in 33 cases in group A and 31 cases in group B (p > 0.05). Better results were seen for the coronal and sagittal orientation of both tibial and femoral components in group A. Globally, 26 cases in group A and 12 cases in group B were implanted in an optimal manner for all studied criteria (p < 0.01). The navigation system used allows a significant improvement in the quality of implantation of a total knee prosthesis in comparison to classical, surgeon-controlled instrumentation. Long-term outcome could consequently be improved.

  14. Reaching Agreement in Quantum Hybrid Networks.

    PubMed

    Shi, Guodong; Li, Bo; Miao, Zibo; Dower, Peter M; James, Matthew R

    2017-07-20

    We consider a basic quantum hybrid network model consisting of a number of nodes each holding a qubit, for which the aim is to drive the network to a consensus in the sense that all qubits reach a common state. Projective measurements are applied to serve as the control means, and the measurement results are exchanged among the nodes via classical communication channels. In this way the quantum-operation/classical-communication nature of hybrid quantum networks is captured, although coherent states and joint operations are not taken into consideration in order to facilitate a clear and explicit analysis. We show how to carry out centralized optimal path planning for this network with all-to-all classical communications, in which case the problem becomes a stochastic optimal control problem with a continuous action space. To overcome the computation and communication obstacles facing the centralized solutions, we also develop a distributed Pairwise Qubit Projection (PQP) algorithm, where pairs of nodes meet at a given time and respectively perform measurements at their geometric average. We show that the qubit states are driven to a consensus almost surely under the proposed PQP algorithm, and that the expected qubit density operators converge to the average of the network's initial values.

  15. Beam orientation optimization for intensity-modulated radiation therapy using mixed integer programming

    NASA Astrophysics Data System (ADS)

    Yang, Ruijie; Dai, Jianrong; Yang, Yong; Hu, Yimin

    2006-08-01

    The purpose of this study is to extend an algorithm proposed for beam orientation optimization in classical conformal radiotherapy to intensity-modulated radiation therapy (IMRT) and to evaluate the algorithm's performance in IMRT scenarios. In addition, the effect of the candidate pool of beam orientations, in terms of beam orientation resolution and starting orientation, on the optimized beam configuration, plan quality and optimization time is also explored. The algorithm is based on the technique of mixed integer linear programming, in which binary and positive float variables are employed to represent candidate beam orientations and beamlet weights in beam intensity maps. Both beam orientations and beam intensity maps are simultaneously optimized in the algorithm with a deterministic method. Several different clinical cases were used to test the algorithm, and the results show that both target coverage and critical structure sparing were significantly improved for the plans with optimized beam orientations compared to those with equi-spaced beam orientations. The calculation time was less than an hour for the cases with 36 binary variables on a PC with a Pentium IV 2.66 GHz processor. It was also found that coarsening the beam orientation resolution to 10° greatly reduced the size of the candidate pool of beam orientations without significant influence on the optimized beam configuration and plan quality, while the choice of starting orientation had a large influence. Our study demonstrates that the algorithm can be applied to IMRT scenarios and that better beam orientation configurations can be obtained with it. Furthermore, the optimization efficiency can be greatly increased through proper selection of the beam orientation resolution and starting beam orientation while preserving the optimized beam configuration and plan quality.
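
    A toy version of the simultaneous selection problem is sketched below: the binary orientation variables are handled by brute-force enumeration and the beamlet weights by a linear program (scipy.optimize.linprog), standing in for a full branch-and-bound mixed integer solver; the dose matrices are random stand-ins.

      import numpy as np
      from itertools import combinations
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      n_beams, n_tv, n_oar = 8, 5, 5
      D_tv = rng.uniform(0.5, 1.0, (n_tv, n_beams))    # dose to target per unit weight
      D_oar = rng.uniform(0.0, 0.5, (n_oar, n_beams))  # dose to organ at risk
      prescription = 1.0

      best = (np.inf, None, None)
      for beams in combinations(range(n_beams), 3):    # "binary variables": pick 3 beams
          idx = list(beams)
          # LP: minimize total OAR dose s.t. every target voxel gets >= prescription
          res = linprog(D_oar[:, idx].sum(axis=0),
                        A_ub=-D_tv[:, idx], b_ub=-prescription * np.ones(n_tv),
                        bounds=[(0, None)] * 3)
          if res.success and res.fun < best[0]:
              best = (res.fun, beams, res.x)
      print("beams:", best[1], "OAR dose:", round(best[0], 3), "weights:", best[2].round(3))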

  16. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value function of the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
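
    The transition probabilities follow directly from the joint copula of adjacent periods. The sketch below builds a transition matrix between flow classes from a Clayton copula (one Archimedean family member, with an assumed parameter theta = 2) on the uniform scale; mapping physical flows to this scale via their marginal CDFs is omitted.

      import numpy as np

      def clayton(u, v, theta=2.0):
          # Clayton copula CDF
          return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

      def transition_matrix(bins, theta=2.0):
          # P(next class j | current class i) for class boundaries on the CDF scale
          n = len(bins) - 1
          P = np.zeros((n, n))
          for i in range(n):
              u1, u2 = bins[i], bins[i + 1]
              for j in range(n):
                  v1, v2 = bins[j], bins[j + 1]
                  mass = (clayton(u2, v2, theta) - clayton(u1, v2, theta)
                          - clayton(u2, v1, theta) + clayton(u1, v1, theta))
                  P[i, j] = mass / (u2 - u1)
          return P

      print(transition_matrix(np.array([1e-9, 0.25, 0.5, 0.75, 1.0])).round(3))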

  17. Phase transition of Surprise optimization in community detection

    NASA Astrophysics Data System (ADS)

    Xiang, Ju; Tang, Yan-Ni; Gao, Yuan-Yuan; Liu, Lang; Hao, Yi; Li, Jian-Ming; Zhang, Yan; Chen, Shi

    2018-02-01

    Community detection is one of the important issues in the research of complex networks. In the literature, many methods have been proposed to detect community structures in networks, each with its own scope of application. In this paper, we investigate an important measure for community detection, Surprise (Aldecoa and Marín, Sci. Rep. 3 (2013) 1060), by focusing on the critical points in the merging and splitting of communities. We first analyze the critical behavior of Surprise and give the phase diagrams of the community-partition transition. The results show that the critical number of communities for Surprise increases super-exponentially with the link-density difference, while it is close to that of Modularity when the difference between inter- and intra-community link densities is small. By directly optimizing Surprise, we experimentally test these results on various networks in a series of comparisons with other classical methods, and further find that the heterogeneity of networks can quicken the splitting of communities. On the whole, the results show that Surprise tends to split communities for various reasons, such as heterogeneity in link density, degree and community size, and it thus exhibits higher resolution than other methods, e.g., Modularity, in community detection. Finally, we provide several approaches for enhancing Surprise.
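
    Surprise scores a partition by the improbability, under a hypergeometric null, of observing at least the actual number of intra-community links. A minimal evaluation, assuming the standard cumulative-hypergeometric definition, is sketched below.

      import numpy as np
      from scipy.stats import hypergeom

      def surprise(n_nodes, n_links, intra_pairs, intra_links):
          # S = -ln P(X >= intra_links), X ~ Hypergeom(total pairs, intra pairs, links)
          total_pairs = n_nodes * (n_nodes - 1) // 2
          return -np.log(hypergeom(total_pairs, intra_pairs, n_links).sf(intra_links - 1))

      # 32 nodes in two communities of 16: intra-community pairs = 2 * C(16, 2) = 240
      print(surprise(n_nodes=32, n_links=100, intra_pairs=240, intra_links=80))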

  18. In vitro culture of axe-head glochidia in pink heelsplitter Potamilus alatus and mechanism of its high host specialists

    PubMed Central

    Ma, Xue Yan; Zheng, Bing Qing; Xu, Pao; Xu, Liang; Hua, Dan; Yuan, Xin Hua; Gu, Ruo Bo

    2018-01-01

    The basal medium M199 or MEM was used in the classical method of in vitro culture of glochidia, where 1–5% CO2 was required to maintain a stable physiological pH for completion of non-parasitic metamorphosis. The classical method faces a great challenge with those glochidia that undergo development of visceral tissue and significantly increase in size during metamorphosis. The improved in vitro culture technique and the classical method were first compared for non-parasitic metamorphosis and development of glochidia in the pink heelsplitter. Based on the improved method, the optimal in vitro culture medium was further selected from 14 plasmas and sera, realizing the non-parasitic metamorphosis of axe-head glochidia for the first time. The results showed that the addition of different plasmas (sera) had a significant effect on glochidial metamorphosis in the pink heelsplitter. Only glochidia in the skewband grunt and red drum groups could complete metamorphosis; the metamorphosis rate in the skewband grunt group was 93.3±3.1% at 24±0.5°C, significantly higher than in the marine and desalinated red drum groups. Heat-inactivation treatment of the plasma of yellow catfish and Barbus capito had a significant effect on glochidia survival and shell growth. The metamorphosis rate also varied among different gravid periods and generally decreased with gravid time. Further comparison of free amino acids and fatty acids indicated that taurine, present at high concentration, was the only amino acid that might promote the rapid growth of the glochidial shell, and that the lack of adequate DPA and DHA might be an important reason for abnormal foot and visceral development. Combined with our results on artificial selection of host fish, we tentatively established the mechanism of host specialism in the pink heelsplitter for the first time. This is the first report of non-parasitic metamorphosis of axe-head glochidia based on our improved in vitro culture method, which should provide an important reference for fundamental research on glochidia metamorphosis and also benefit a better understanding of the mechanisms of host specialism and generalism in Unionidae species. PMID:29447194

  19. A comparative study of different methods for calculating electronic transition rates

    NASA Astrophysics Data System (ADS)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.

  20. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes.

    PubMed

    Zahabiun, Farzaneh; Sadjjadi, Seyed Mahmoud; Esfandiari, Farideh

    2015-01-01

    Permanent slide preparation of nematodes, especially small ones, is time-consuming and difficult, and their margins become scarious. To address this problem, a modified double glass mounting method was developed and compared with the classic method. A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by either double glass mounting or the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes mounted with the double glass method, and the differentiation between different organs of the nematodes remained well defined and clear. This method is cost-effective and fast for mounting small nematodes compared to the classic method.

  1. Equivalence-point electromigration acid-base titration via moving neutralization boundary electrophoresis.

    PubMed

    Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi

    2011-04-01

    In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted experiments on the EABT via the method of the moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was good linearity between the movement length and the natural logarithm of HCl concentration under the optimized conditions, and this linearity could be used to detect the concentration of the acid. The experiments further manifested that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) indicators with different pK(a) values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) a constant equivalence-point titration always existed in the EABT, unlike in classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The results achieved herein provide new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
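
    The reported linearity between movement length and the logarithm of concentration makes the determination a calibration problem. The sketch below fits such a line on hypothetical calibration data (invented numbers, not the paper's measurements) and inverts it for an unknown acid.

      import numpy as np

      # Hypothetical boundary movement lengths (mm) at a fixed run time
      # versus known HCl concentrations (mol/L): L = a*ln(C) + b
      conc = np.array([0.01, 0.02, 0.05, 0.10, 0.20])
      length = np.array([4.1, 6.8, 10.5, 13.2, 16.0])
      a, b = np.polyfit(np.log(conc), length, 1)

      def concentration_from_length(L):
          # Invert the calibration line to estimate an unknown concentration
          return np.exp((L - b) / a)

      print(concentration_from_length(12.0))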

  2. Theoretical analysis of evaporative cooling of classic heat stroke patients

    NASA Astrophysics Data System (ADS)

    Alzeer, Abdulaziz H.; Wissler, E. H.

    2018-05-01

    Heat stroke is a serious health concern globally and is associated with high mortality. Newer treatments must be designed to improve outcomes. The aim of this study is to evaluate the effect of variations in ambient temperature and wind speed on the rate of cooling in a simulated heat stroke subject using the dynamic model of Wissler. We assume that a 60-year-old, 70-kg female suffers classic heat stroke after walking fully exposed to the sun for 4 h while the ambient temperature is 40 °C, relative humidity is 20%, and wind speed is 2.5 m/s. Her esophageal and skin temperatures are 41.9 and 40.7 °C at the time of collapse. Cooling is accomplished by misting with lukewarm water while exposed to forced airflow at a temperature of 20 to 40 °C and a velocity of 0.5 or 1 m/s. Skin blood flow is assumed to be either normal, one-half of normal, or twice normal. At a wind speed of 0.5 m/s and normal skin blood flow, decreasing the air temperature from 40 to 20 °C increased cooling and reduced the time required to reach the desired temperature of 38 °C. This relationship was also maintained in reduced blood flow states. Increasing the wind speed to 1 m/s increased cooling and reduced the time to reach the optimal temperature in both normal and reduced skin blood flow states. In conclusion, evaporative cooling provides an effective method for cooling classic heat stroke patients. The maximum heat dissipation in the Wissler model was recorded when the entire body was misted with lukewarm water and forced air at 1 m/s and 20 °C was applied.

  3. Evaluating the performance of the particle finite element method in parallel architectures

    NASA Astrophysics Data System (ADS)

    Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.

    2014-05-01

    This paper presents a high-performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material-derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the quality of the method on large simulations, it is imperative to use parallel environments. Parallel strategies for the Finite Element Method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed considering the parallel strategy selected. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems on a single computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, with domain decomposition the memory is automatically isolated, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses run on multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.

  4. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as Gauss's method: (i) only a 1-D inversion is needed to obtain the estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is estimated in addition to the excess mass; (iv) the method is very robust with respect to noise; (v) the profile may be chosen in such a way as to minimize the effects of interfering anomalies or side effects due to a limited area extension. The multiscale estimation of excess mass can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We also show that these estimates are substantially improved with respect to those obtained with the classical approach.
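
    In the lowest (monopole) truncation, the data along a vertical profile above the source constrain the excess mass M and the centre-of-mass depth d through g(z) = GM/(z + d)^2. The sketch below fits these two parameters to a synthetic noisy profile; the full method's multipole system and optimal-altitude criterion are not reproduced.

      import numpy as np
      from scipy.optimize import curve_fit

      G = 6.674e-11  # m^3 kg^-1 s^-2

      def gz(z, mass, depth):
          # Vertical attraction on the axis above a buried point source
          return G * mass / (z + depth) ** 2

      z = np.linspace(0.0, 400.0, 40)              # profile altitudes (m)
      rng = np.random.default_rng(4)
      data = gz(z, 1e9, 150.0) * (1 + 0.02 * rng.standard_normal(z.size))

      (mass, depth), _ = curve_fit(gz, z, data, p0=(1e8, 100.0))
      print("mass ~ %.3g kg, depth ~ %.1f m" % (mass, depth))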

  5. Big climate data analysis

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred

    2015-04-01

    The Big Data era has also begun in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data volume, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014 Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinitely many, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Here too, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change related risks.

  6. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices, so an in-depth interpretation of the inversion results is possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
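
    A stripped-down version of the marginalization idea for a linear Gaussian inversion is sketched below: sample plausible prior and observation error variances, weight each candidate inversion by the marginal likelihood of the observations (a stand-in for the maximum-likelihood-based weighting used here), and average. All matrices are toy stand-ins, not an atmospheric transport operator.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(5)
      H = rng.uniform(0.0, 1.0, (6, 3))        # toy observation operator
      x_true = np.array([1.0, 2.0, 0.5])
      y = H @ x_true + rng.normal(0, 0.1, 6)   # virtual observations
      x_prior = np.ones(3)

      def inversion(b_var, r_var):
          B, R = b_var * np.eye(3), r_var * np.eye(6)
          S = H @ B @ H.T + R
          K = B @ H.T @ np.linalg.inv(S)
          return x_prior + K @ (y - H @ x_prior), S

      estimates, weights = [], []
      for _ in range(500):                     # Monte Carlo over error statistics
          x_hat, S = inversion(rng.uniform(0.1, 4.0), rng.uniform(0.01, 0.5))
          estimates.append(x_hat)
          weights.append(multivariate_normal(H @ x_prior, S).pdf(y))
      w = np.array(weights) / np.sum(weights)
      print("marginalized flux estimate:", np.average(estimates, axis=0, weights=w))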

  7. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  8. Does asymmetric correlation affect portfolio optimization?

    NASA Astrophysics Data System (ADS)

    Fryd, Lukas

    2017-07-01

    The classical portfolio optimization problem does not assume asymmetric behavior in the relationships among asset returns. The existence of an asymmetric response of correlation to bad news could be important information in portfolio optimization. The paper applies the Dynamic Conditional Correlation model (DCC) and its asymmetric version (ADCC) to capture asymmetric behavior of conditional correlation. We analyse asymmetric correlation among the S&P index, a bond index and the spot gold price before the mortgage crisis of 2008. We evaluate the forecasting ability of the models during and after the mortgage crisis and demonstrate the impact of asymmetric correlation on the reduction of portfolio variance.
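
    A quick empirical diagnostic for such asymmetry, prior to any DCC/ADCC fitting, is the exceedance correlation: the correlation computed on joint down-moves versus joint up-moves. For the symmetric simulated returns below the two nearly coincide; asymmetric crisis-period data would show a gap. The series are simulated, not market data.

      import numpy as np

      def exceedance_correlations(x, y):
          # Correlation on joint down-moves vs joint up-moves
          down = (x < 0) & (y < 0)
          up = (x > 0) & (y > 0)
          return (np.corrcoef(x[down], y[down])[0, 1],
                  np.corrcoef(x[up], y[up])[0, 1])

      rng = np.random.default_rng(6)
      z = rng.standard_normal(2000)
      equities = 0.01 * (z + rng.standard_normal(2000))
      gold = 0.01 * (-0.3 * z + rng.standard_normal(2000))
      print(exceedance_correlations(equities, gold))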

  9. H-classic: a new method to identify classic articles in Implant Dentistry, Periodontics, and Oral Surgery.

    PubMed

    De la Flor-Martínez, Maria; Galindo-Moreno, Pablo; Sánchez-Fernández, Elena; Piattelli, Adriano; Cobo, Manuel Jesus; Herrera-Viedma, Enrique

    2016-10-01

    The study of classic papers permits analysis of the past, present, and future of a specific area of knowledge. This type of analysis is becoming more frequent and more sophisticated. Our objective was to use the H-classics method, based on the h-index, to analyze classic papers in Implant Dentistry, Periodontics, and Oral Surgery (ID, P, and OS). First, an electronic search of documents related to ID, P, and OS was conducted in journals indexed in Journal Citation Reports (JCR) 2014 within the category 'Dentistry, Oral Surgery & Medicine'. Second, Web of Knowledge databases were searched using MeSH terms related to ID, P, and OS. Finally, the H-classics method was applied to select the classic articles in these disciplines, collecting data on associated research areas, document type, country, institutions, and authors. Of 267,611 documents related to ID, P, and OS retrieved from JCR journals (2014), 248 were selected as H-classics. They were published in 35 journals between 1953 and 2009, most frequently in the Journal of Clinical Periodontology (18.95%), the Journal of Periodontology (18.54%), the International Journal of Oral and Maxillofacial Implants (9.27%), and Clinical Oral Implants Research (6.04%). These classic articles originated from the USA in 49.59% of cases and from Europe in 47.58%, while the most frequent host institution was the University of Gothenburg (17.74%) and the most frequent authors were J. Lindhe (10.48%) and S. Socransky (8.06%). The H-classics approach offers an objective method to identify core knowledge in clinical disciplines such as ID, P, and OS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
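
    The selection rule itself is compact: compute the h-index of the retrieved document set and keep the papers cited at least h times. A minimal sketch with invented citation counts:

      def h_index(citations):
          # Largest h such that h papers have at least h citations each
          ranked = sorted(citations, reverse=True)
          return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

      def h_classics(papers):
          # H-classics: the papers cited at least h times, h = h-index of the set
          h = h_index([c for _, c in papers])
          return [(title, c) for title, c in papers if c >= h]

      corpus = [("paper A", 250), ("paper B", 40), ("paper C", 3), ("paper D", 41)]
      print(h_classics(corpus))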

  10. Reflections on Quantum Data Hiding

    NASA Astrophysics Data System (ADS)

    Winter, Andreas

    Quantum data hiding, originally invented as a limitation on local operations and classical communications (LOCC) in distinguishing globally orthogonal states, is actually a phenomenon arising generically in statistics whenever comparing a 'strong' set of measurements (i.e., decision rules) with a 'weak' one. The classical statistical analogue of this would be secret sharing, in which two perfectly distinguishable multi-partite hypotheses appear to be indistinguishable when accessing only a marginal. The quantum versions are richer in that, for example, LOCC allows for state tomography, so the states cannot become perfectly indistinguishable but only nearly so, and hence the question is one of efficiency. I will discuss two concrete examples and associated sets of problems: 1. Gaussian operations and classical computation (GOCC): Not very surprisingly, GOCC cannot distinguish optimally even two coherent states of a single mode. Here we find states, each a mixture of multi-mode coherent states, which are almost perfectly distinguishable by suitable measurements but which, when restricted to GOCC, i.e. linear optics and post-processing, appear almost identical. The construction is random and relies on coding arguments. Open questions include whether one can give a constructive version of the argument, whether for instance even thermal states can be used, and how efficient the hiding is. 2. Local operations and classical communication (LOCC): It is well known that in a bipartite d x d system, asymptotically log d bits can be hidden. Here we show for the first time, using the calculus of min-entropies, that this is asymptotically optimal. In fact, we get bounds on the data hiding capacity of any preparation system; these are, however, not always tight. While it is known that data hiding by separable states is possible (i.e. the state preparation can be done by LOCC), it is open whether the optimal information efficiency of (asymptotically) log d bits can be achieved by separable states.

  11. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed in quantum computing, because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum …

  12. Noncoherent detection of periodic signals

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1974-01-01

    The optimal Bayes detector for a general periodic waveform having uniform delay and additive white Gaussian noise is examined. It is shown that the detector is much more complex than that for the well-known cases of pure sine waves (i.e., classical noncoherent detection) and narrowband signals. An interpretation of the optimal processing is presented, and several implementations are discussed. The results have application to the noncoherent detection of optical square waves.
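
    For a single tone of unknown phase, the classical noncoherent statistic is the envelope of the in-phase and quadrature correlators. The sketch below applies that statistic at the fundamental of a square wave with random delay; the fully optimal Bayes detector for a general periodic waveform would combine all harmonics, which is exactly the added complexity analyzed here.

      import numpy as np

      def envelope_statistic(x, f0, fs):
          # sqrt((x . cos)^2 + (x . sin)^2): invariant to the unknown phase
          t = np.arange(x.size) / fs
          c = x @ np.cos(2 * np.pi * f0 * t)
          s = x @ np.sin(2 * np.pi * f0 * t)
          return np.hypot(c, s)

      rng = np.random.default_rng(7)
      fs, f0, n = 1000.0, 50.0, 1000
      t = np.arange(n) / fs
      square = np.sign(np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi)))
      noise = rng.standard_normal(n)
      print("signal present:", envelope_statistic(square + noise, f0, fs))
      print("noise only:   ", envelope_statistic(noise, f0, fs))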

  13. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

    Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters that accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, which can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies given by the resulting Lennard-Jones parameters is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
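
    The final step, fitting Lennard-Jones parameters to an interaction-energy map, reduces to a nonlinear least-squares problem. The sketch below fits epsilon and sigma to stand-in reference energies; the numbers are invented and merely play the role of the DCP-corrected density-functional data.

      import numpy as np
      from scipy.optimize import curve_fit

      def lj(r, epsilon, sigma):
          # 12-6 Lennard-Jones pair energy
          sr6 = (sigma / r) ** 6
          return 4.0 * epsilon * (sr6**2 - sr6)

      # Stand-in binding energies (kcal/mol) vs separation (angstrom)
      r_ref = np.linspace(3.4, 6.0, 12)
      e_ref = lj(r_ref, 0.30, 3.70) \
              + 0.005 * np.random.default_rng(8).standard_normal(12)

      (eps, sig), _ = curve_fit(lj, r_ref, e_ref, p0=(0.2, 3.5))
      print("epsilon = %.3f kcal/mol, sigma = %.2f angstrom" % (eps, sig))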

  14. Thermodynamic integration from classical to quantum mechanics.

    PubMed

    Habershon, Scott; Manolopoulos, David E

    2011-12-14

    We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable. © 2011 American Institute of Physics.
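
    The one-dimensional harmonic oscillator admits closed forms for both free energies, which makes it a convenient check on any classical-to-quantum integration scheme. The sketch below compares the exact quantum-minus-classical correction with a numerical integration along a path that scales hbar by a coupling parameter lambda; this is one convenient choice of path for the toy problem, not necessarily the paper's.

      import numpy as np

      def exact_correction(beta, omega, hbar=1.0):
          # F_quantum - F_classical = (1/beta) ln(sinh(x)/x), x = beta*hbar*omega/2
          x = beta * hbar * omega / 2.0
          return np.log(np.sinh(x) / x) / beta

      def ti_correction(beta, omega, hbar=1.0, n=4000):
          # Thermodynamic integration along hbar(lambda) = lambda*hbar, with the
          # divergent classical part subtracted from the integrand
          lam = np.linspace(1e-6, 1.0, n)
          x = beta * hbar * omega * lam / 2.0
          integrand = (hbar * omega / 2.0) / np.tanh(x) - 1.0 / (beta * lam)
          return np.trapz(integrand, lam)

      beta, omega = 2.0, 3.0
      print(exact_correction(beta, omega), ti_correction(beta, omega))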

  15. A new fitting method for measurement of the curvature radius of a short arc with high precision

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Zhong, Hong; Chen, Xiao; Selami, Yassine; Zhao, Hui

    2018-07-01

    The measurement of objects with short arcs is widely encountered in scientific research and industrial production. As the most classic method of arc fitting, least squares fitting suffers from low precision when used to measure arcs with small central angles and few sampling points: the shorter the arc, the lower the measurement accuracy. In order to improve the measurement precision for short arcs, a parameter-constrained fitting method based on a four-parameter circle equation is proposed in this paper. A generalized Lagrange function is introduced, together with optimization by the gradient descent method, to reduce the influence of noise. The simulation and experimental results show that the proposed method retains high precision even when the central angle drops below 4° and good robustness when the noise standard deviation rises to 0.4 mm. This new fitting method is suitable for high-precision measurement of short arcs with small central angles without any prior information.
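
    For reference, the classical algebraic least-squares (Kasa) circle fit that serves as the baseline is sketched below; on a 3.5-degree arc it already shows the radius bias that the constrained four-parameter fit with Lagrangian refinement is designed to remove.

      import numpy as np

      def kasa_circle_fit(x, y):
          # Linear least squares for x^2 + y^2 = 2*a*x + 2*b*y + c
          A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
          (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
          return a, b, np.sqrt(c + a**2 + b**2)

      rng = np.random.default_rng(9)
      theta = np.deg2rad(np.linspace(0.0, 3.5, 30))   # short arc, true R = 500
      x = 500 * np.cos(theta) + rng.normal(0, 0.05, 30)
      y = 500 * np.sin(theta) + rng.normal(0, 0.05, 30)
      print("fitted radius: %.1f (true 500)" % kasa_circle_fit(x, y)[2])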

  16. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
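
    A bare-bones version of the pipeline (distance matrix, classical-MDS principal coordinates, ridge regression on the leading coordinates) is sketched below, using a plain L2 distance between toy curves instead of dynamic time warping and a fixed rather than optimally tuned penalty.

      import numpy as np

      def principal_coordinates(D, k):
          # Classical MDS: double-centre squared distances, take top-k eigenpairs
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n
          B = -0.5 * J @ (D**2) @ J
          vals, vecs = np.linalg.eigh(B)
          idx = np.argsort(vals)[::-1][:k]
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

      def ridge_fit(X, y, lam=1.0):
          X1 = np.column_stack([np.ones(len(y)), X])
          P = lam * np.eye(X1.shape[1]); P[0, 0] = 0.0   # leave intercept unpenalized
          return np.linalg.solve(X1.T @ X1 + P, X1.T @ y)

      rng = np.random.default_rng(10)
      curves = np.cumsum(rng.standard_normal((50, 100)), axis=1)  # functional predictors
      D = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)
      y = curves[:, -1] + rng.normal(0, 0.5, 50)
      print(ridge_fit(principal_coordinates(D, k=5), y))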

  17. Practical quantum random number generator based on measuring the shot noise of vacuum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yong; Zou Hongxin; Tian Liang

    2010-06-15

    The shot noise of vacuum states is a kind of quantum noise and is totally random. In this paper a nondeterministic random number generation scheme based on measuring the shot noise of vacuum states is presented and experimentally demonstrated. We use a homodyne detector to measure the shot noise of vacuum states. Considering that the frequency bandwidth of our detector is limited, we derive the optimal sampling rate so that sampling points have the least correlation with each other. We also choose a method to extract random numbers from the sampling values, and prove that the influence of classical noise can be avoided with this method, so that the detector does not have to be shot-noise limited. The random numbers generated with this scheme have passed the ENT and Diehard tests.
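
    The extraction stage can be illustrated with the generic von Neumann debiaser applied to thresholded Gaussian samples; the scheme's own extraction method differs, so this only shows how residual classical bias can be removed after sampling.

      import numpy as np

      def von_neumann_extract(bits):
          # Map pairs 01 -> 0 and 10 -> 1; discard 00 and 11
          pairs = bits[: bits.size // 2 * 2].reshape(-1, 2)
          keep = pairs[:, 0] != pairs[:, 1]
          return pairs[keep, 0]

      rng = np.random.default_rng(11)
      samples = rng.standard_normal(100000) + 0.1   # Gaussian noise with a bias
      raw_bits = (samples > 0).astype(np.uint8)
      out = von_neumann_extract(raw_bits)
      print("raw bias:", raw_bits.mean(), "extracted bias:", out.mean())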

  18. Direct power control of DFIG wind turbine systems based on an intelligent proportional-integral sliding mode control.

    PubMed

    Li, Shanzhi; Wang, Haoping; Tian, Yang; Aitouch, Abdel; Klein, John

    2016-09-01

    This paper presents an intelligent proportional-integral sliding mode control (iPISMC) for direct power control of a variable-speed constant-frequency wind turbine system. This approach deals with optimal power production (in the maximum power point tracking sense) under several disturbance factors such as turbulent wind. The controller is made of two sub-components: (i) an intelligent proportional-integral module for online disturbance compensation and (ii) a sliding mode module for circumventing disturbance estimation errors. The iPISMC method has been tested on the FAST/Simulink platform of a 5 MW wind turbine system. The obtained results demonstrate that the proposed iPISMC method outperforms both classical PI and intelligent proportional-integral control (iPI) in terms of active power and response time. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
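
    A minimal sketch of the control structure on a first-order toy plant with an unknown disturbance is given below: a PI term plus a switching sliding-mode term. The intelligent-PI part of iPISMC also estimates the disturbance online, which is omitted here; the gains and plant are invented for illustration.

      import numpy as np

      dt, steps = 0.001, 5000
      kp, ki, ks = 2.0, 5.0, 0.8         # PI gains and sliding-mode gain (assumed)
      x, integ, ref = 0.0, 0.0, 1.0
      for k in range(steps):
          t = k * dt
          e = ref - x
          integ += e * dt
          u = kp * e + ki * integ + ks * np.sign(e)   # PI + sliding-mode switching
          d = 0.5 * np.sin(2 * np.pi * t)             # turbulence-like disturbance
          x += (-x + u + d) * dt                      # toy plant: x' = -x + u + d
      print("final tracking error: %.4f" % (ref - x))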

  19. Statistical Mechanics of Coherent Ising Machine — The Case of Ferromagnetic and Finite-Loading Hopfield Models —

    NASA Astrophysics Data System (ADS)

    Aonishi, Toru; Mimura, Kazushi; Utsunomiya, Shoko; Okada, Masato; Yamamoto, Yoshihisa

    2017-10-01

    The coherent Ising machine (CIM) has attracted attention as one of the most effective Ising computing architectures for solving large scale optimization problems because of its scalability and high-speed computational ability. However, it is difficult to implement the Ising computation in the CIM because the theories and techniques of classical thermodynamic equilibrium Ising spin systems cannot be directly applied to the CIM. This means we have to adapt these theories and techniques to the CIM. Here we focus on a ferromagnetic model and a finite loading Hopfield model, which are canonical models sharing a common mathematical structure with almost all other Ising models. We derive macroscopic equations to capture nonequilibrium phase transitions in these models. The statistical mechanical methods developed here constitute a basis for constructing evaluation methods for other Ising computation models.

  20. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model; it converts the model order determination problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms of the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure the improvement contributed by both the newly introduced and the previously included variables, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through goodness-of-fit measurement or significance testing. Results on simulated data and classic time-series data show that the proposed method is simple, reliable, and applicable in practical engineering.
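
    A compact stand-in for the procedure is sketched below: forward stepwise selection over linear and nonlinear candidate terms scored by AIC. The paper seeds the linear part via the partial autocorrelation function and uses significance tests; plain AIC is used here for brevity, and the series is simulated.

      import numpy as np

      def aic(X, y):
          # OLS fit; AIC = n*ln(RSS/n) + 2k
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = np.sum((y - X @ beta) ** 2)
          return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

      def forward_stepwise(cands, y, names):
          chosen, X = [], np.ones((len(y), 1))
          best, improved = aic(X, y), True
          while improved:
              improved, pick = False, None
              for j in range(cands.shape[1]):
                  if j in chosen:
                      continue
                  score = aic(np.column_stack([X, cands[:, j]]), y)
                  if score < best:
                      best, pick, improved = score, j, True
              if improved:
                  chosen.append(pick)
                  X = np.column_stack([X, cands[:, pick]])
          return [names[j] for j in chosen]

      rng = np.random.default_rng(12)
      y = np.zeros(500)
      for t in range(2, 500):   # GNAR-type series with one quadratic term
          y[t] = (0.5 * y[t-1] - 0.3 * y[t-2] + 0.2 * y[t-1]**2
                  + 0.1 * rng.standard_normal())
      Y, l1, l2, l3 = y[3:], y[2:-1], y[1:-2], y[:-3]
      cands = np.column_stack([l1, l2, l3, l1**2, l1 * l2, l2**2])
      names = ["y(t-1)", "y(t-2)", "y(t-3)", "y(t-1)^2", "y(t-1)y(t-2)", "y(t-2)^2"]
      print(forward_stepwise(cands, Y, names))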
