Sample records for optimal functional forms

  1. The importance of functional form in optimal control solutions of problems in population dynamics

    USGS Publications Warehouse

    Runge, M.C.; Johnson, F.A.

    2002-01-01

    Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood profile may end up producing alternatives that do not differ as importantly as if different functional forms had been used. We recommend that biological knowledge be used to bracket a range of possible functional forms, and robustness of conclusions be checked over this range.
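
    The recruitment and survival relationships compared above can be made concrete with a toy harvest model. The sketch below is illustrative only: the three recruitment forms and the density-dependent survival use assumed coefficients (not the authors' parameterizations), and the "optimal" constant harvest rate is found by a simple grid search over long-run yield rather than by formal optimal control.

    ```python
    import numpy as np

    # Illustrative (not the authors') parameterizations of recruitment R(N)
    # and nonharvest survival s(N); all constants below are assumptions.
    recruitment = {
        "linear":      lambda N: max(0.0, 0.9 - 0.4 * N),
        "exponential": lambda N: 0.9 * np.exp(-0.8 * N),
        "hyperbolic":  lambda N: 0.9 / (1.0 + 1.5 * N),
    }
    survival = lambda N: 0.95 / (1.0 + 0.3 * N)     # density-dependent survival (assumed)

    def long_run_yield(recruit, h, n_years=200):
        """Simulate N_{t+1} = s(N_post) * N_post with harvest rate h and return mean yield."""
        N, harvest = 0.5, []
        for _ in range(n_years):
            grown = N * (1.0 + recruit(N))          # fall population after recruitment
            harvest.append(h * grown)               # harvest taken at rate h
            post = grown * (1.0 - h)                # post-harvest population
            N = survival(post) * post
        return np.mean(harvest[-50:])               # average yield once near equilibrium

    for name, R in recruitment.items():
        rates = np.linspace(0.0, 0.9, 91)
        best = rates[np.argmax([long_run_yield(R, h) for h in rates])]
        print(f"{name:12s} optimal constant harvest rate = {best:.2f}")
    ```

    Running the sketch with different recruitment forms shifts the yield-maximizing harvest rate, which is the qualitative point the abstract makes about functional form driving the optimal strategy.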

  2. Nonparametric variational optimization of reaction coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk

State of the art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered as the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form, thus avoiding this source of error.
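
    To illustrate the quantity being targeted (not the authors' nonparametric variational method), the sketch below estimates the committor empirically along a one-dimensional trajectory: each frame is labeled by whether the trajectory next reaches the B boundary before the A boundary. The random-walk trajectory and the boundary values are assumptions for the example.

    ```python
    import numpy as np

    def empirical_committor(x, a, b):
        """For each frame of a 1D trajectory x, return 1 if the trajectory next reaches
        the B boundary (x >= b) before the A boundary (x <= a), else 0.
        Frames after the last boundary visit are marked NaN."""
        q = np.full(len(x), np.nan)
        next_hit = np.nan
        for i in range(len(x) - 1, -1, -1):          # scan backwards in time
            if x[i] >= b:
                next_hit = 1.0
            elif x[i] <= a:
                next_hit = 0.0
            q[i] = next_hit
        return q

    # Toy diffusive trajectory (unbiased random walk, illustrative only)
    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(0.0, 0.1, 100_000))
    q = empirical_committor(x, a=-3.0, b=3.0)
    print("fraction of frames committed to B:", np.nanmean(q))
    ```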

  3. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  4. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, which combines potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was adopted with the minimum thrust deduction factor as the objective function. An appropriate displacement is the basic constraint condition, and boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, which indicates that the propulsion efficiency is distinctly improved by this optimal design method.
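
    The SUMT interior-point idea named above can be shown on a generic smooth problem; the thrust deduction objective and hydrodynamic constraints themselves require flow solvers and are not reproduced here. In the sketch below the objective and constraint are placeholders, and the barrier schedule (initial mu, shrink factor) is an assumption.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Generic SUMT loop: minimize f(x) subject to g_i(x) >= 0 by repeatedly
    # minimizing the barrier function f(x) - mu * sum(log g_i(x)) while shrinking mu.
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    g = [lambda x: 1.0 - x[0] ** 2 - x[1] ** 2]          # feasible region: inside unit disk

    def sumt(x0, mu=1.0, shrink=0.2, tol=1e-8):
        x = np.asarray(x0, float)
        while mu > tol:
            barrier = lambda x: f(x) - mu * sum(np.log(max(gi(x), 1e-12)) for gi in g)
            x = minimize(barrier, x, method="Nelder-Mead").x   # unconstrained sub-problem
            mu *= shrink                                       # tighten the barrier
        return x

    print(sumt([0.0, 0.0]))   # converges toward the constrained optimum on the disk boundary
    ```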

  5. Continuous Optimization on Constraint Manifolds

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.

  6. Free-form reticulated shell structures searched for maximum buckling strength

    NASA Astrophysics Data System (ADS)

    Takiuchi, Yuji; Kato, Shiro; Nakazawa, Shoji

    2017-10-01

In this paper, a shape optimization scheme is proposed for maximizing the buckling strength of free-form steel reticulated shells. In order to discuss the effectiveness of objective functions with respect to maximizing buckling strength, several different optimizations are applied to shallow steel single-layer reticulated shells with rigidly jointed tubular members. The objective functions compared are the linear buckling load, strain energy, initial yield load, and elasto-plastic buckling strength evaluated with the Modified Dunkerley Formula. For the free forms obtained from the four optimization schemes, both the elastic buckling and elasto-plastic buckling behaviour are investigated and compared, considering geometrical imperfections. It is concluded that the first and fourth optimization schemes are effective from the viewpoint of buckling strength, and the relation between the generalized slenderness ratio and the appropriate objective function for buckling strength maximization is clarified.

  7. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm relies on this combination of parental traits to provide, ideally, a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific details of this function are of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
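
    The toolbox itself is MATLAB code; the sketch below is a language-neutral illustration (in Python) of the standard single-objective PSO update with inertia, cognitive, and social terms. The coefficient values, swarm size, and the Rosenbrock test function are assumptions, not taken from the toolbox.

    ```python
    import numpy as np

    def sopso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal single-objective particle swarm optimizer (illustrative coefficients)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))        # positions
        v = np.zeros_like(x)                                   # velocities
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x]) # personal bests
        gbest = pbest[np.argmin(pbest_f)]                      # global best
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            gbest = pbest[np.argmin(pbest_f)]
        return gbest, pbest_f.min()

    # Example: minimize the Rosenbrock function over [-2, 2]^2
    rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
    print(sopso(rosen, bounds=[(-2, 2), (-2, 2)]))
    ```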

  8. Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.

    PubMed

    Sun, Kangkang; Sui, Shuai; Tong, Shaocheng

    2018-04-01

This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form and with unknown nonlinear functions. Fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, respectively, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. By using the backstepping decentralized feedforward control scheme, the considered interconnected large-scale nonlinear system in strict feedback form is transformed into an equivalent affine large-scale nonlinear system. Subsequently, an optimal decentralized fuzzy adaptive control scheme is constructed. The overall optimal decentralized fuzzy adaptive controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller ensures that all the variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples are provided to illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.

  9. Concurrent topology optimization for minimization of total mass considering load-carrying capabilities and thermal insulation simultaneously

    NASA Astrophysics Data System (ADS)

    Long, Kai; Wang, Xuan; Gu, Xianguang

    2017-09-01

The present work introduces a novel concurrent optimization formulation to meet the requirements of lightweight design and various constraints simultaneously. Nodal displacement of the macrostructure and effective thermal conductivity of the microstructure are regarded as the constraint functions, thereby accounting for both load-carrying capability and thermal insulation properties. The effective properties of the porous material derived from numerical homogenization are used for macrostructural analysis. Meanwhile, displacement vectors of the macrostructure from the original and adjoint load cases are used for sensitivity analysis of the microstructure. Design variables in the form of reciprocal functions of relative densities are introduced and used for linearization of the constraint functions. The objective function of total mass is approximately expressed by a second-order Taylor series expansion. The proposed concurrent optimization problem is then solved using a sequential quadratic programming algorithm, by splitting it into a series of sub-problems in the form of quadratic programs. Finally, several numerical examples are presented to validate the effectiveness of the proposed optimization method. The effects of initial designs, prescribed limits on nodal displacement, and effective thermal conductivity on the optimized designs are also investigated. A number of optimized macrostructures and their corresponding microstructures are obtained.

  10. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
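
    The two best-fitting forms named in the abstract have standard closed-form expressions, reproduced below. The parameter values in the example call are illustrative only, not estimates from the study.

    ```python
    import numpy as np

    def prelec2(p, gamma, delta):
        """Two-parameter Prelec weighting function: w(p) = exp(-delta * (-ln p)^gamma)."""
        return np.exp(-delta * (-np.log(p)) ** gamma)

    def lin_log_odds(p, gamma, delta):
        """Linear-in-log-odds form: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)."""
        return delta * p ** gamma / (delta * p ** gamma + (1 - p) ** gamma)

    p = np.linspace(0.01, 0.99, 5)
    print(prelec2(p, gamma=0.65, delta=1.0))       # illustrative parameter values
    print(lin_log_odds(p, gamma=0.6, delta=0.8))
    ```

    Both forms produce the familiar inverse-S shape (overweighting small probabilities, underweighting large ones), which is why discriminating between them requires carefully designed stimuli.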

  11. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  12. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  13. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  14. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
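
    The end product described above, a continuous inverse function from desired output to a locally optimal input, can be illustrated without the population-of-agents machinery. The sketch below is a stand-in: for each target output it solves a small constrained optimization directly and then spline-interpolates the optimal inputs. The two-input system and the cost function are assumed toys.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.interpolate import CubicSpline

    # Toy two-input, single-output system and cost (both assumed for illustration)
    output = lambda x: x[0] + 2.0 * x[1]            # system output y = f(x1, x2)
    cost   = lambda x: x[0] ** 2 + 5.0 * x[1] ** 2  # effort to be minimized

    targets, optimal_inputs = np.linspace(0.0, 4.0, 9), []
    for y in targets:
        res = minimize(cost, x0=[0.0, 0.0],
                       constraints={"type": "eq", "fun": lambda x, y=y: output(x) - y})
        optimal_inputs.append(res.x)

    # Continuous "optimal inverse function": desired output -> optimal input vector
    inverse = CubicSpline(targets, np.array(optimal_inputs))
    print(inverse(2.5))     # locally optimal input for a set point of 2.5
    ```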

  15. Research on theoretical optimization and experimental verification of minimum resistance hull form based on Rankine source method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-Ji; Zhang, Zhu-Xin

    2015-09-01

To obtain a low-resistance, high-efficiency, energy-saving ship, a minimum total resistance hull form design method is studied based on the potential flow theory of wave-making resistance, considering the effects of stern viscous separation. With the sum of wave resistance and viscous resistance as the objective function and the parameters of a B-spline function as design variables, mathematical models are built using the Nonlinear Programming (NLP) method, ensuring the basic displacement limit and accounting for stern viscous separation. We develop proprietary ship lines optimization procedures. The Series 60 hull is used as the parent ship in the optimization design to obtain a theoretically improved ship (Series60-1). Drag tests of the improved ship (Series60-1) are then performed to obtain the actual minimum total resistance hull form.

  16. Optimization of an exchange-correlation density functional for water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, Michelle; Fernández-Serra, Marivi; Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York 11794-3800

    2016-06-14

We describe a method, which we call data projection onto parameter space (DPPS), to optimize an energy functional of the electron density so that it reproduces a dataset of experimental magnitudes. Our scheme, based on Bayes' theorem, constrains the optimized functional not to depart unphysically from existing ab initio functionals. The resulting functional maximizes the probability of being the “correct” parameterization of a given functional form, in the sense of Bayes' theory. The application of DPPS to water sheds new light on why density functional theory has performed rather poorly for liquid water, on what improvements are needed, and on the intrinsic limitations of the generalized gradient approximation to electron exchange and correlation. Finally, we present tests of our water-optimized functional, which we call vdW-DF-w, showing that it performs very well for a variety of condensed water systems.

  17. Expanded explorations into the optimization of an energy function for protein design

    PubMed Central

    Huang, Yao-ming; Bystroff, Christopher

    2014-01-01

    Nature possesses a secret formula for the energy as a function of the structure of a protein. In protein design, approximations are made to both the structural representation of the molecule and to the form of the energy equation, such that the existence of a general energy function for proteins is by no means guaranteed. Here we present new insights towards the application of machine learning to the problem of finding a general energy function for protein design. Machine learning requires the definition of an objective function, which carries with it the implied definition of success in protein design. We explored four functions, consisting of two functional forms, each with two criteria for success. Optimization was carried out by a Monte Carlo search through the space of all variable parameters. Cross-validation of the optimized energy function against a test set gave significantly different results depending on the choice of objective function, pointing to relative correctness of the built-in assumptions. Novel energy cross-terms correct for the observed non-additivity of energy terms and an imbalance in the distribution of predicted amino acids. This paper expands on the work presented at ACM-BCB, Orlando FL , October 2012. PMID:24384706
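
    The parameter search described above can be sketched generically. The code below shows a plain Metropolis Monte Carlo walk over a vector of energy-function parameters against a training objective; the protein-design energy terms and objective functions are omitted, and the placeholder objective, step size, and temperature are assumptions.

    ```python
    import numpy as np

    def monte_carlo_search(objective, theta0, n_steps=5000, step=0.05, temp=0.1, seed=0):
        """Metropolis search over energy-function parameters theta (generic sketch)."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, float)
        f_cur, best = objective(theta), None
        for _ in range(n_steps):
            trial = theta + rng.normal(0.0, step, theta.shape)   # random perturbation
            f_try = objective(trial)
            if f_try < f_cur or rng.random() < np.exp((f_cur - f_try) / temp):
                theta, f_cur = trial, f_try                      # accept the move
            if best is None or f_cur < best[1]:
                best = (theta.copy(), f_cur)
        return best

    # Placeholder objective: recover a "true" weight vector (stands in for cross-validated
    # design success; the real objective in the paper is far more elaborate)
    true_w = np.array([1.0, -0.5, 2.0])
    objective = lambda w: np.sum((w - true_w) ** 2)
    print(monte_carlo_search(objective, theta0=np.zeros(3)))
    ```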

  18. Towards an Optimal Gradient-dependent Energy Functional of the PZ-SIC Form

    DOE PAGES

    Jónsson, Elvar Örn; Lehtola, Susi; Jónsson, Hannes

    2015-06-01

Results of Perdew–Zunger self-interaction corrected (PZ-SIC) density functional theory calculations of the atomization energy of 35 molecules are compared to those of high-level quantum chemistry calculations. While the PBE functional, which is commonly used in calculations of condensed matter, is known to predict on average too high an atomization energy (overbinding of the molecules), the application of PZ-SIC gives a large overcorrection and leads to significant underestimation of the atomization energy. The exchange enhancement factor that is optimal for the generalized gradient approximation within the Kohn-Sham (KS) approach may not be optimal for the self-interaction corrected functional. The PBEsol functional, where the exchange enhancement factor was optimized for solids, gives poor results for molecules in KS but turns out to work better than PBE in PZ-SIC calculations. The exchange enhancement is weaker in PBEsol and the functional is closer to the local density approximation. Furthermore, the drop in the exchange enhancement factor for increasing reduced gradient in the PW91 functional gives more accurate results than the plateaued enhancement in the PBE functional. A step towards an optimal exchange enhancement factor for a gradient-dependent functional of the PZ-SIC form is taken by constructing an exchange enhancement factor that mimics PBEsol for small values of the reduced gradient and PW91 for large values. The average atomization energy is then in closer agreement with the high-level quantum chemistry calculations, but the variance is still large, the F2 molecule being a notable outlier.
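
    For reference, the sketch below writes out the standard PBE/PBEsol exchange enhancement factors and the commonly quoted PW91 fit, and blends them with a smooth switch so that the result follows PBEsol at small reduced gradient s and PW91 at large s. The PW91 expression is the published fit as best recalled here, and the tanh switching function and its parameters are purely illustrative; this is not the functional constructed by the authors.

    ```python
    import numpy as np

    KAPPA = 0.804
    def f_pbe(s, mu=0.21951):        # PBE exchange enhancement factor
        return 1.0 + KAPPA - KAPPA / (1.0 + mu * s**2 / KAPPA)

    def f_pbesol(s):                 # PBEsol: same form, weaker gradient dependence (mu = 10/81)
        return f_pbe(s, mu=10.0 / 81.0)

    def f_pw91(s):                   # PW91 exchange enhancement factor (standard published fit)
        a = 0.19645 * s * np.arcsinh(7.7956 * s)
        return (1.0 + a + (0.2743 - 0.1508 * np.exp(-100.0 * s**2)) * s**2) / (1.0 + a + 0.004 * s**4)

    def f_blend(s, s0=1.5, width=0.5):
        """Illustrative switch: PBEsol-like at small reduced gradient s, PW91-like at large s."""
        t = 0.5 * (1.0 + np.tanh((s - s0) / width))
        return (1.0 - t) * f_pbesol(s) + t * f_pw91(s)

    s = np.linspace(0.0, 3.0, 7)
    print(np.column_stack([s, f_pbesol(s), f_pw91(s), f_blend(s)]))
    ```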

  19. Optimization with Fuzzy Data via Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Kosiński, Witold

    2010-09-01

    Order fuzzy numbers (OFN) that make possible to deal with fuzzy inputs quantitatively, exactly in the same way as with real numbers, have been recently defined by the author and his 2 coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are presented in the form of step functions, with finite resolution, simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated. Its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.

  20. An optimal consumption and investment problem with quadratic utility and negative wealth constraints.

    PubMed

    Roh, Kum-Hwan; Kim, Ji Yeoun; Shin, Yong Hyun

    2017-01-01

    In this paper, we investigate the optimal consumption and portfolio selection problem with negative wealth constraints for an economic agent who has a quadratic utility function of consumption and receives a constant labor income. Due to the property of the quadratic utility function, we separate our problem into two cases and derive the closed-form solutions for each case. We also illustrate some numerical implications of the optimal consumption and portfolio.

  1. The invariant of the stiffness filter function with the weight filter function of the power function form

    NASA Astrophysics Data System (ADS)

    Shang, Zhen; Sui, Yun-Kang

    2012-12-01

Based on the independent continuous mapping (ICM) method and the homogenization method, a research model is constructed to propose and deduce a theorem and corollary from the invariant between the weight filter function and the corresponding stiffness filter function of power-function form. The efficiency of searching for the optimum solution can be raised through the choice of rational filter functions, so the above results are important for further study of structural topology optimization.

  2. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

An optimized method to calculate the error correction capability of the tool influence function (TIF) under certain polishing conditions is proposed based on the smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under certain polishing conditions. Comparative analysis shows that the optimized method is simpler in form and achieves the same accuracy with less computing time than the previous method.

  3. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
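
    The validation test described above can be caricatured in a few lines. The sketch below omits the genetic-programming search over functional forms entirely and only illustrates the other ingredient, a parallel-tempering Metropolis search, by fitting the (epsilon, sigma) parameters of a fixed Lennard-Jones form to Lennard-Jones reference energies. Temperatures, step sizes, and the reference grid are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    r = np.linspace(0.95, 2.5, 40)
    lj = lambda r, eps, sig: 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
    e_ref = lj(r, 1.0, 1.0)                          # reference Lennard-Jones energies
    cost = lambda p: np.mean((lj(r, *p) - e_ref) ** 2)

    # Parallel tempering over the parameter space (functional-form evolution omitted)
    temps = [0.001, 0.01, 0.1, 1.0]
    params = [np.array([2.0, 1.5]) for _ in temps]   # one replica per temperature
    costs = [cost(p) for p in params]
    for step in range(20000):
        for k, T in enumerate(temps):                # Metropolis move within each replica
            trial = params[k] + rng.normal(0.0, 0.05, 2)
            c = cost(trial)
            if c < costs[k] or rng.random() < np.exp((costs[k] - c) / T):
                params[k], costs[k] = trial, c
        if step % 100 == 0:                          # attempt a swap between neighbouring replicas
            k = rng.integers(0, len(temps) - 1)
            d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (costs[k] - costs[k + 1])
            if rng.random() < np.exp(min(0.0, d)):
                params[k], params[k + 1] = params[k + 1], params[k]
                costs[k], costs[k + 1] = costs[k + 1], costs[k]

    print("recovered (epsilon, sigma):", params[0])  # coldest replica, near (1.0, 1.0) if converged
    ```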

  4. Configuration optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David

    1991-01-01

    The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be apriori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of volume inequality constraint; algorithms for the volume inequality constraint; object function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.

  5. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and of the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  6. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for the ranges of field sizes to consider the variations of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization, and first order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences on the optimized PDDs were observed to be 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first order derivatives from the functional form were found to improve the computational speed up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreements with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.

  7. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.

  8. Plant species and functional group combinations affect green roof ecosystem functions.

    PubMed

    Lundholm, Jeremy; Macivor, J Scott; Macdougall, Zachary; Ranalli, Melissa

    2010-03-12

Green roofs perform ecosystem services such as summer roof temperature reduction and stormwater capture that directly contribute to lower building energy use and potential economic savings. These services are in turn related to ecosystem functions performed by the vegetation layer such as radiation reflection and transpiration, but little work has examined the role of plant species composition and diversity in improving these functions. We used a replicated modular extensive (shallow growing-medium) green roof system planted with monocultures or mixtures containing one, three or five life-forms, to quantify two ecosystem services: summer roof cooling and water capture. We also measured the related ecosystem properties/processes of albedo, evapotranspiration, and the mean and temporal variability of aboveground biomass over four months. Mixtures containing three or five life-form groups simultaneously optimized several green roof ecosystem functions, outperforming monocultures and single life-form groups, but there was much variation in performance depending on which life-forms were present in the three life-form mixtures. Some mixtures outperformed the best monocultures for water capture, evapotranspiration, and an index combining both water capture and temperature reductions. Combinations of tall forbs, grasses and succulents simultaneously optimized a range of ecosystem performance measures, thus the main benefit of including all three groups was not to maximize any single process but to perform a variety of functions well. Ecosystem services from green roofs can be improved by planting certain life-form groups in combination, directly contributing to climate change mitigation and adaptation strategies. The strong performance by certain mixtures of life-forms, especially tall forbs, grasses and succulents, warrants further investigation into niche complementarity or facilitation as mechanisms governing biodiversity-ecosystem functioning relationships in green roof ecosystems.

  9. Plant Species and Functional Group Combinations Affect Green Roof Ecosystem Functions

    PubMed Central

    Lundholm, Jeremy; MacIvor, J. Scott; MacDougall, Zachary; Ranalli, Melissa

    2010-01-01

Background: Green roofs perform ecosystem services such as summer roof temperature reduction and stormwater capture that directly contribute to lower building energy use and potential economic savings. These services are in turn related to ecosystem functions performed by the vegetation layer such as radiation reflection and transpiration, but little work has examined the role of plant species composition and diversity in improving these functions. Methodology/Principal Findings: We used a replicated modular extensive (shallow growing-medium) green roof system planted with monocultures or mixtures containing one, three or five life-forms, to quantify two ecosystem services: summer roof cooling and water capture. We also measured the related ecosystem properties/processes of albedo, evapotranspiration, and the mean and temporal variability of aboveground biomass over four months. Mixtures containing three or five life-form groups simultaneously optimized several green roof ecosystem functions, outperforming monocultures and single life-form groups, but there was much variation in performance depending on which life-forms were present in the three life-form mixtures. Some mixtures outperformed the best monocultures for water capture, evapotranspiration, and an index combining both water capture and temperature reductions. Combinations of tall forbs, grasses and succulents simultaneously optimized a range of ecosystem performance measures, thus the main benefit of including all three groups was not to maximize any single process but to perform a variety of functions well. Conclusions/Significance: Ecosystem services from green roofs can be improved by planting certain life-form groups in combination, directly contributing to climate change mitigation and adaptation strategies. The strong performance by certain mixtures of life-forms, especially tall forbs, grasses and succulents, warrants further investigation into niche complementarity or facilitation as mechanisms governing biodiversity-ecosystem functioning relationships in green roof ecosystems. PMID:20300196

  10. MO-FG-CAMPUS-TeP2-01: A Graph Form ADMM Algorithm for Constrained Quadratic Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Belcher, AH; Wiersma, R

Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as a part of the objective function and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in a graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including the TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, and it was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: A graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers and it also uses significantly less computer memory.
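
    The graph-form splitting can be sketched on a toy quadratic problem with hard voxel-dose-style bounds. The code below is a generic ADMM on the constraint set {(x, y): y = Ax}, not the authors' implementation, proximal operators, or the CORT data; the matrix sizes, bounds, regularization weight, and penalty parameter are all assumptions.

    ```python
    import numpy as np

    def graph_form_admm(A, d, lo, hi, lam=1e-3, rho=1.0, n_iter=300):
        """ADMM on the graph form {(x, y): y = A x}:
           minimize 0.5*||y - d||^2 + (lam/2)*||x||^2  subject to lo <= y <= hi."""
        m, n = A.shape
        x, y, u = np.zeros(n), np.zeros(m), np.zeros(m)
        K = np.linalg.cholesky(lam * np.eye(n) + rho * A.T @ A)   # factor once, reuse
        for _ in range(n_iter):
            rhs = rho * A.T @ (y - u)
            x = np.linalg.solve(K.T, np.linalg.solve(K, rhs))     # x-update (ridge least squares)
            v = A @ x + u
            y = np.clip((d + rho * v) / (1.0 + rho), lo, hi)      # y-update (prox + box projection)
            u += A @ x - y                                        # scaled dual update
        return x, y

    # Tiny synthetic example: 50 "voxels", 8 "beamlets", dose bounds as hard constraints
    rng = np.random.default_rng(0)
    A = rng.random((50, 8))
    d = rng.uniform(0.8, 1.2, 50)                 # prescribed voxel doses
    x, y = graph_form_admm(A, d, lo=0.5, hi=1.5)
    print("max |A x - y| residual:", np.max(np.abs(A @ x - y)))
    ```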

  11. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final state of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
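
    The two-bar truss example in the abstract can be posed as a small NLP and handed to a generic optimizer, which is the step SOL automates when generating interfaces to CONMIN, ADS, or NPSOL. The sketch below states one common version of the problem directly in SciPy; the geometry, load, and material values are placeholders, not the figure's numbers.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Classic two-bar truss sizing (all numbers are illustrative placeholders):
    # design variables are tube diameter d [m] and truss height H [m].
    B, t, P    = 0.4, 0.003, 30e3        # half-span, wall thickness, apex load
    rho, E, sy = 7850.0, 2.1e11, 250e6   # density, Young's modulus, yield stress

    def weight(v):
        d, H = v
        L = np.hypot(B, H)
        return 2.0 * rho * np.pi * d * t * L                    # objective: total truss mass

    def constraints(v):
        d, H = v
        L = np.hypot(B, H)
        sigma = P * L / (2.0 * H * np.pi * d * t)               # member stress
        sigma_cr = np.pi**2 * E * (d**2 + t**2) / (8.0 * L**2)  # Euler buckling stress
        return [sy - sigma, sigma_cr - sigma]                   # both must be >= 0

    res = minimize(weight, x0=[0.05, 0.5],
                   bounds=[(0.01, 0.3), (0.1, 2.0)],
                   constraints={"type": "ineq", "fun": constraints},
                   method="SLSQP")
    print("optimal diameter, height:", res.x, " mass [kg]:", res.fun)
    ```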

  12. Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Purves, D. W.; Smith, M. J.

    2014-12-01

    Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.

  13. Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui

    2018-06-01

    In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on the potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under the constraints, including the displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and theoretical basis for designing green ships.

  14. Effects of inclination and eccentricity on optimal trajectories between Earth and Venus

    NASA Technical Reports Server (NTRS)

    Gravier, J.-P.; Marchal, C.; Culp, R. D.

    1973-01-01

The true optimal transfers between Earth and Venus, including the effects of the inclination and eccentricity of the planets' orbits, are presented as functions of the corresponding idealized Hohmann transfers. The method of determining the optimal transfers using the calculus of variations is presented. For every possible Hohmann window, specified as a continuous function of the longitude of perihelion of the Hohmann trajectory, the corresponding numerically exact optimal two-impulse transfers are given in graphical form. The cases for which the optimal two-impulse transfer is the absolute optimum, and those for which a three-impulse transfer provides the absolute optimum, are indicated. This information furnishes everything necessary for quick and accurate orbit calculations for preliminary Venus mission analysis. This makes it possible to use the actual optimal transfers for advanced planning in place of the standard Hohmann transfers.
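
    The idealized Hohmann baseline against which the true optimal transfers are expressed is a coplanar, circular-orbit two-impulse transfer; its delta-v budget is given below (the paper's inclination and eccentricity corrections are not reproduced). The orbital radii are mean heliocentric values.

    ```python
    import numpy as np

    MU_SUN = 1.327124e11                   # km^3/s^2, solar gravitational parameter
    R_EARTH, R_VENUS = 1.496e8, 1.082e8    # km, mean orbital radii (circular approximation)

    def hohmann_dv(mu, r1, r2):
        """Delta-v of the two impulses for an idealized coplanar circular Hohmann transfer."""
        a = 0.5 * (r1 + r2)                # transfer ellipse semi-major axis
        dv1 = abs(np.sqrt(mu * (2.0 / r1 - 1.0 / a)) - np.sqrt(mu / r1))
        dv2 = abs(np.sqrt(mu / r2) - np.sqrt(mu * (2.0 / r2 - 1.0 / a)))
        return dv1, dv2

    dv1, dv2 = hohmann_dv(MU_SUN, R_EARTH, R_VENUS)
    print(f"heliocentric impulses: {dv1:.2f} + {dv2:.2f} = {dv1 + dv2:.2f} km/s")
    ```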

  15. Identification of optimal soil hydraulic functions and parameters for predicting soil moisture

    EPA Science Inventory

    We examined the accuracy of several commonly used soil hydraulic functions and associated parameters for predicting observed soil moisture data. We used six combined methods formed by three commonly used soil hydraulic functions – i.e., Brooks and Corey (1964) (BC), Campbell (19...
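
    One of the hydraulic functions named in the truncated abstract, Brooks and Corey (1964), has a simple closed form, sketched below with the Burdine-type relative conductivity that is commonly paired with it. The parameter values in the example are illustrative, not fitted values from the study.

    ```python
    import numpy as np

    def brooks_corey_theta(h, theta_r, theta_s, h_b, lam):
        """Brooks-Corey (1964) retention: effective saturation Se = (h_b/h)^lam for h > h_b."""
        Se = np.where(h > h_b, (h_b / h) ** lam, 1.0)
        return theta_r + Se * (theta_s - theta_r)

    def brooks_corey_kr(h, h_b, lam):
        """Relative hydraulic conductivity under the Burdine model: Kr = Se^(3 + 2/lam)."""
        Se = np.where(h > h_b, (h_b / h) ** lam, 1.0)
        return Se ** (3.0 + 2.0 / lam)

    h = np.array([5.0, 20.0, 100.0, 500.0])          # suction head [cm], illustrative
    print(brooks_corey_theta(h, theta_r=0.05, theta_s=0.45, h_b=10.0, lam=0.4))
    print(brooks_corey_kr(h, h_b=10.0, lam=0.4))
    ```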

  16. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  17. A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shankar; Mitra, Ankan

    2018-05-01

    Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.

  18. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Miles, Lowell H. (Inventor); Whitaker, Sterling R. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  19. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
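
    The two constructions compared above can be written down on a one-variable bound-constrained problem. The sketch below uses a scalar minimizer rather than the truncated-Newton method of the paper; the problem, the initial multiplier estimate, and the barrier parameters are assumptions. The modified barrier follows Polyak's form with the standard multiplier update.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Problem: minimize f(x) = (x + 1)^2 subject to c(x) = x >= 0 (constraint active at x* = 0)
    f = lambda x: (x + 1.0) ** 2
    c = lambda x: x

    def classical_barrier(mu):
        """Minimize f(x) - mu * log(c(x)); the solution approaches x* only as mu -> 0."""
        obj = lambda x: f(x) - mu * np.log(max(c(x), 1e-300))
        return minimize_scalar(obj, bounds=(1e-12, 10.0), method="bounded").x

    def modified_barrier(mu, n_outer=10):
        """Polyak modified barrier: f(x) - mu * lam * log(1 + c(x)/mu) with multiplier updates."""
        lam, x = 1.0, 1.0
        for _ in range(n_outer):
            obj = lambda x: f(x) - mu * lam * np.log(1.0 + c(x) / mu)
            x = minimize_scalar(obj, bounds=(-mu + 1e-12, 10.0), method="bounded").x
            lam = lam / (1.0 + c(x) / mu)            # multiplier estimate -> optimal multiplier
        return x, lam

    for mu in (1.0, 0.1, 0.01):
        print(f"mu={mu:5.2f}  classical x={classical_barrier(mu):.4f}  modified (x, lam)={modified_barrier(mu)}")
    ```

    The point of the comparison is visible in the output: the classical barrier only approaches the constrained solution as mu shrinks (with worsening conditioning), while the modified barrier converges for a fixed, moderate mu as the multiplier estimate is updated.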

  20. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

    A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensors in the optimization can reduce the searching space in the optimization to make the proposed method more effective. Moreover, how to select a most optimal sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.

  1. Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology

    PubMed Central

    Palaparthi, Anil; Riede, Tobias

    2017-01-01

    Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563

  2. A modified multi-objective particle swarm optimization approach and its application to the design of a deepwater composite riser

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Chen, J.

    2017-09-01

    A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Different from traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated with the traditional one. Kriging meta-models are built to match expensive or black-box functions. By applying Kriging meta-models, the number of function evaluations is decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the areas of the trapezoids formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure of whether the Pareto-optimal solutions converge to the Pareto front. Illustrative examples indicate that, to obtain Pareto-optimal solutions, the proposed method needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II method, and both the accuracy and the computational efficiency are improved. The proposed method is also applied to the design of a deepwater composite riser example in which the structural performances are calculated by numerical analysis. The design aim was to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off of tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively deal with multi-objective optimizations with black-box functions.
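
    One plausible reading of the trapezoid index described above, sketched for a bi-objective minimization front: sort the non-dominated points by the first objective and sum the areas of the trapezoids they form with that objective's axis. The sample fronts below are invented, so the exact normalization used in the paper may differ.

        import numpy as np

        def trapezoid_index(front):
            # front: (n, 2) array of non-dominated bi-objective values;
            # sum of trapezoid areas formed with the f1 axis by consecutive Pareto points
            pts = front[np.argsort(front[:, 0])]
            widths = np.diff(pts[:, 0])
            heights = 0.5 * (pts[:-1, 1] + pts[1:, 1])
            return float(np.sum(widths * heights))

        # as the approximation of the front improves, the index should stabilise toward a limit value
        front_coarse = np.array([[0.0, 1.0], [0.5, 0.45], [1.0, 0.0]])
        front_fine = np.array([[0.0, 1.0], [0.25, 0.72], [0.5, 0.45], [0.75, 0.2], [1.0, 0.0]])
        print(trapezoid_index(front_coarse), trapezoid_index(front_fine))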

  3. Research on output feedback control

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Kramer, F. S.

    1985-01-01

    In designing fixed order compensators, an output feedback formulation has been adopted by suitably augmenting the system description to include the compensator states. However, the minimization of the performance index over the range of possible compensator descriptions was impeded due to the nonuniqueness of the compensator transfer function. A controller canonical form of the compensator was chosen to reduce the number of free parameters to its minimal number in the optimization. In the MIMO case, the controller form requires a prespecified set of ascending controllability indices. This constraint on the compensator structure is rather innocuous in relation to the increase in convergence rate of the optimization. Moreover, the controller form is easily relatable to a unique controller transfer function description. This structure of the compensator does not require penalizing the compensator states for a nonzero or coupled solution, a problem that occurs when following a standard output feedback synthesis formulation.

  4. Optimizing Cardiovascular Benefits of Exercise: A Review of Rodent Models

    PubMed Central

    Davis, Brittany; Moriguchi, Takeshi; Sumpio, Bauer

    2013-01-01

    Although research unanimously maintains that exercise can ward off cardiovascular disease (CVD), the optimal type, duration, intensity, and combination of forms are not yet clear. In our review of existing rodent-based studies on exercise and cardiovascular health, we attempt to find the optimal forms, intensities, and durations of exercise. Using Scopus and Medline, a literature review of English-language comparative journal studies of cardiovascular benefits and exercise was performed. This review examines the existing literature on rodent models of aerobic, anaerobic, and power exercise and compares the benefits of various training forms, intensities, and durations. The rodent studies reviewed in this article correlate with reports on human subjects that suggest regular aerobic exercise can improve cardiac and vascular structure and function, as well as lipid profiles, and reduce the risk of CVD. Findings demonstrate an abundance of rodent-based aerobic studies, but a lack of anaerobic and power forms of exercise, as well as comparisons of these three components of exercise. Thus, further studies must be conducted to determine a truly optimal regimen for cardiovascular health. PMID:24436579

  5. An integrative method for testing form–function linkages and reconstructed evolutionary pathways of masticatory specialization

    PubMed Central

    Tseng, Z. Jack; Flynn, John J.

    2015-01-01

    Morphology serves as a ubiquitous proxy in macroevolutionary studies to identify potential adaptive processes and patterns. Inferences of functional significance of phenotypes or their evolution are overwhelmingly based on data from living taxa. Yet, correspondence between form and function has been tested in only a few model species, and those linkages are highly complex. The lack of explicit methodologies to integrate form and function analyses within a deep-time and phylogenetic context weakens inferences of adaptive morphological evolution, by invoking but not testing form–function linkages. Here, we provide a novel approach to test mechanical properties at reconstructed ancestral nodes/taxa and the strength and direction of evolutionary pathways in feeding biomechanics, in a case study of carnivorous mammals. Using biomechanical profile comparisons that provide functional signals for the separation of feeding morphologies, we demonstrate, using experimental optimization criteria on estimation of strength and direction of functional changes on a phylogeny, that convergence in mechanical properties and degree of evolutionary optimization can be decoupled. This integrative approach is broadly applicable to other clades, by using quantitative data and model-based tests to evaluate interpretations of function from morphology and functional explanations for observed macroevolutionary pathways. PMID:25994295

  6. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  7. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents are designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
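
    A toy sketch of the bounded-rational-agents idea described above (not the authors' implementation): each agent keeps an independent distribution over its own variable, repeatedly plays a Boltzmann best response to the expected objective under the other agent's distribution, and the temperature is annealed toward zero. The two-variable objective, the number of moves, and the annealing schedule are assumptions.

        import numpy as np

        moves = np.arange(5)                      # each agent controls one variable with 5 possible values
        # toy joint objective G(x1, x2) to be minimized (an assumption)
        G = (moves[:, None] - 2)**2 + (moves[None, :] - 3)**2 + 0.1 * moves[:, None] * moves[None, :]

        q1 = np.full(5, 0.2)                      # independent (product) distribution over the joint state
        q2 = np.full(5, 0.2)

        T = 2.0                                   # temperature / Lagrange parameter, annealed toward 0
        for it in range(60):
            # agent 1: expected cost of each of its moves under q2, then Boltzmann update
            c1 = G @ q2
            q1 = np.exp(-(c1 - c1.min()) / T); q1 /= q1.sum()
            # agent 2: same, conditioned on q1
            c2 = q1 @ G
            q2 = np.exp(-(c2 - c2.min()) / T); q2 /= q2.sum()
            T *= 0.9                              # automated annealing focuses the joint distribution

        print("expected G:", float(q1 @ G @ q2))
        print("most probable joint move:", int(np.argmax(q1)), int(np.argmax(q2)))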

  8. Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions

    NASA Technical Reports Server (NTRS)

    Cohn, S.; Isaacson, E.; Ghil, M.

    1981-01-01

    The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and the optimal interpolation (OI) filter are examined for the effectiveness of their gain matrices, using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
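
    A minimal sketch of the gain-matrix comparison the abstract describes, on a generic linear system rather than the shallow-water model: the Kalman filter propagates its forecast covariance dynamically, whereas optimal interpolation reuses a fixed, prescribed covariance, so the two analyses differ only through the gain. The model, covariances, and observation operator below are assumptions.

        import numpy as np

        def analysis(xf, Pf, y, H, R):
            # gain K = Pf H^T (H Pf H^T + R)^{-1}; returns analysis state and covariance
            K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
            xa = xf + K @ (y - H @ xf)
            Pa = (np.eye(len(xf)) - K @ H) @ Pf
            return xa, Pa

        rng = np.random.default_rng(1)
        M = np.array([[0.9, 0.1], [0.0, 0.8]])     # toy forecast model (assumption)
        Q = 0.05 * np.eye(2)                        # model error covariance
        H = np.array([[1.0, 0.0]])                  # observe the first component only
        R = np.array([[0.1]])

        x_kf = np.zeros(2); P_kf = np.eye(2)        # Kalman filter: covariance propagated each cycle
        x_oi = np.zeros(2); P_oi = np.eye(2)        # optimal interpolation: covariance held fixed
        truth = np.array([1.0, -1.0])

        for k in range(20):
            truth = M @ truth
            y = H @ truth + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
            # forecast step
            x_kf = M @ x_kf; P_kf = M @ P_kf @ M.T + Q
            x_oi = M @ x_oi
            # analysis step
            x_kf, P_kf = analysis(x_kf, P_kf, y, H, R)
            x_oi, _ = analysis(x_oi, P_oi, y, H, R)   # OI: same formula, prescribed covariance

        print("KF error:", np.linalg.norm(x_kf - truth), "OI error:", np.linalg.norm(x_oi - truth))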

  9. Three-Dimensional Path Planning for Uninhabited Combat Aerial Vehicle Based on Predator-Prey Pigeon-Inspired Optimization in Dynamic Environment.

    PubMed

    Zhang, Bo; Duan, Haibin

    2017-01-01

    Three-dimensional path planning of an uninhabited combat aerial vehicle (UCAV) is a complicated optimization problem, which mainly focuses on optimizing the flight route while considering different types of constraints in a complex combat environment. A novel predator-prey pigeon-inspired optimization (PPPIO) is proposed to solve the UCAV three-dimensional path planning problem in a dynamic environment. Pigeon-inspired optimization (PIO) is a new bio-inspired optimization algorithm. In this algorithm, a map and compass operator model and a landmark operator model are used to search for the best value of a function. The predator-prey concept is adopted to improve global best properties and enhance the convergence speed. The characteristics of the optimal path are presented in the form of a cost function. The comparative simulation results show that the proposed PPPIO algorithm is more efficient than the basic PIO, particle swarm optimization (PSO), and differential evolution (DE) in solving UCAV three-dimensional path planning problems.

  10. Coordinated Optimization of Visual Cortical Maps (I) Symmetry-based Analysis

    PubMed Central

    Reichl, Lars; Heide, Dominik; Löwel, Siegrid; Crowley, Justin C.; Kaschube, Matthias; Wolf, Fred

    2012-01-01

    In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps. PMID:23144599

  11. Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.

    PubMed

    Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin

    2018-05-15

    In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed for the Industrial Internet of Things (IIoT), which allows multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α-fair utility under transmit power and minimum rate constraints. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we first convert the objective function in the optimization problem into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to exploit the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.
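
    For reference, a sketch of the standard α-fair utility that the objective above is built on; the mapping to the paper's rate and power variables is not reproduced here, and the example rates are made up. α = 0 recovers sum throughput, α = 1 proportional fairness, and large α approaches max-min fairness.

        import numpy as np

        def alpha_fair_utility(rates, alpha):
            # standard alpha-fair utility: sum of x^(1-alpha) / (1-alpha), or sum of log(x) when alpha = 1
            rates = np.asarray(rates, dtype=float)
            if np.isclose(alpha, 1.0):
                return float(np.sum(np.log(rates)))
            return float(np.sum(rates**(1.0 - alpha) / (1.0 - alpha)))

        rates = [2.0, 1.0, 0.25]                     # example per-sensor rates (made up)
        for alpha in (0.0, 0.5, 1.0, 2.0, 8.0):
            print(alpha, alpha_fair_utility(rates, alpha))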

  12. Using Optimal Test Assembly Methods for Shortening Patient-Reported Outcome Measures: Development and Validation of the Cochin Hand Function Scale-6: A Scleroderma Patient-Centered Intervention Network Cohort Study.

    PubMed

    Levis, Alexander W; Harel, Daphna; Kwakkenbos, Linda; Carrier, Marie-Eve; Mouthon, Luc; Poiraudeau, Serge; Bartlett, Susan J; Khanna, Dinesh; Malcarne, Vanessa L; Sauve, Maureen; van den Ende, Cornelia H M; Poole, Janet L; Schouffoer, Anne A; Welling, Joep; Thombs, Brett D

    2016-11-01

    To develop and validate a short form of the Cochin Hand Function Scale (CHFS), which measures hand disability, for use in systemic sclerosis, using objective criteria and reproducible techniques. Responses on the 18-item CHFS were obtained from English-speaking patients enrolled in the Scleroderma Patient-Centered Intervention Network Cohort. CHFS unidimensionality was verified using confirmatory factor analysis, and an item response theory model was fit to CHFS items. Optimal test assembly (OTA) methods identified a maximally precise short form for each possible form length between 1 and 17 items. The final short form selected was the form with the least number of items that maintained statistically equivalent convergent validity, compared to the full-length CHFS, with the Health Assessment Questionnaire (HAQ) disability index (DI) and the physical function domain of the 29-item Patient-Reported Outcomes Measurement Information System (PROMIS-29). There were 601 patients included. A 6-item short form of the CHFS (CHFS-6) was selected. The CHFS-6 had a Cronbach's alpha of 0.93. Correlations of the CHFS-6 summed score with HAQ DI (r = 0.79) and PROMIS-29 physical function (r = -0.54) were statistically equivalent to the CHFS (r = 0.81 and r = -0.56). The correlation with the full CHFS was high (r = 0.98). The OTA procedure generated a valid short form of the CHFS with minimal loss of information compared to the full-length form. The OTA method used was based on objective, prespecified criteria, but should be further studied for viability as a general procedure for shortening patient-reported outcome measures in health research. © 2016, American College of Rheumatology.

  13. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  14. Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    2016-12-08

    In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tri-diagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
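
    A generic sketch of the simplification the abstract points to: when W⁻¹ is tridiagonal, the objective r^T W⁻¹ r can be accumulated from its diagonal and off-diagonal bands without forming or inverting the full covariance matrix. The band values and residuals below are placeholders, not LANL multiplicity data.

        import numpy as np

        def chi2_tridiagonal(r, diag, off):
            # chi^2 = r^T W^{-1} r for a symmetric tridiagonal W^{-1} with main diagonal `diag`
            # (length n) and first off-diagonal `off` (length n-1)
            r = np.asarray(r, dtype=float)
            return float(np.sum(diag * r**2) + 2.0 * np.sum(off * r[:-1] * r[1:]))

        # placeholder residuals and bands; in practice these come from the data and the closed form for W^{-1}
        r = np.array([0.3, -0.1, 0.2, 0.05])
        diag = np.array([4.0, 5.0, 5.0, 4.0])
        off = np.array([-1.5, -1.5, -1.5])

        # cross-check against the dense computation
        Winv = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
        print(chi2_tridiagonal(r, diag, off), float(r @ Winv @ r))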

  15. [Study on the choice of functional monomer before preparation of myclobutanil molecularly imprinted polymer].

    PubMed

    Gao, Wen-Hui; Liu, Bo; Li, Xing-Feng; Han, Jun-Hua; Jia, Ying-Min

    2014-03-01

    To prepare a myclobutanil molecularly imprinted polymer, a method was established for choosing the appropriate functional monomer and its dosage. UV spectroscopy was applied to study the binding mode, the interaction strength, the optimal concentration ratio, and the number of binding sites between myclobutanil and the functional monomers methacrylic acid (MAA) and acrylamide (AM). The results showed that hydrogen-bonding interactions could be formed between myclobutanil and either MAA or AM. The π electrons of the conjugated double bond of the triazole ring in myclobutanil could transition to the π* antibonding orbital upon absorbing energy, and the formation of hydrogen bonds shifted the π→π* absorption band. The maximum absorption wavelength red-shifted as the functional monomer concentration in the system increased. The research revealed that the optimal concentration ratios between myclobutanil and the two monomers were c(M):c(MAA) = 1:4 and c(M):c(AM) = 1:2. Myclobutanil bound to both functional monomers with strong affinity. The molecularly imprinted polymer prepared using AM as the functional monomer had better stability and specificity of recognition for myclobutanil.

  16. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, Jack

    2017-01-01

    Two methods for constructing performance functions for formation flight for drag reduction, suitable for use with an extremum-seeking control system, are presented. The first method approximates an a priori measured or estimated drag-reduction performance function by combining real-time measurements of readily available parameters. The parameters are combined with weightings determined from a least-squares optimization to form a blended performance function.
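
    A sketch of the first method as described above, with invented signals: candidate real-time measurements are stacked into a matrix and their blending weights are obtained from a least-squares fit to an a priori measured drag-reduction performance function. The particular signals and the measured function below are assumptions, not flight data.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 10.0, 200)

        # readily available real-time parameters (placeholders for quantities such as trim-surface deflections)
        p1 = np.sin(0.6 * t)
        p2 = np.cos(0.3 * t)
        p3 = np.ones_like(t)
        X = np.column_stack([p1, p2, p3])

        # a priori measured/estimated drag-reduction performance function (invented here)
        J_measured = 0.8 * p1 - 0.5 * p2 + 0.1 + 0.02 * rng.normal(size=t.size)

        # least-squares weights, then the blended performance function used in place of direct drag measurements
        w, *_ = np.linalg.lstsq(X, J_measured, rcond=None)
        J_blended = X @ w
        print("weights:", w, "rms blend error:", float(np.sqrt(np.mean((J_blended - J_measured)**2))))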

  17. Usefulness of the Wechsler Intelligence Scale short form for assessing functional outcomes in patients with schizophrenia.

    PubMed

    Sumiyoshi, Chika; Fujino, Haruo; Sumiyoshi, Tomiki; Yasuda, Yuka; Yamamori, Hidenaga; Ohi, Kazutaka; Fujimoto, Michiko; Takeda, Masatoshi; Hashimoto, Ryota

    2016-11-30

    The Wechsler Adult Intelligence Scale (WAIS) has been widely used to assess intellectual functioning not only in healthy adults but also in people with psychiatric disorders. The purpose of the study was to develop an optimal WAIS-III short form (SF) to evaluate intellectual status in patients with schizophrenia. One hundred and fifty patients with schizophrenia and 221 healthy controls entered the study. To select subtests for the SFs, the following criteria were considered: 1) predictability of the full IQ (FIQ), 2) representativeness of the IQ structure, 3) consistency of subtests across versions, 4) sensitivity to functional outcome measures, 5) conciseness in administration time. First, exploratory factor analysis (EFA) and multiple regression analysis were conducted to select subtests satisfying the first and second criteria. Then, candidate SFs were nominated based on the third criterion and the coverage of verbal IQ and performance IQ. Finally, the optimality of the candidate SFs was evaluated in terms of the fourth and fifth criteria. The results suggest that the dyad of Similarities and Symbol Search was optimal, satisfying the above criteria. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Optimal Fault-Tolerant Control for Discrete-Time Nonlinear Strict-Feedback Systems Based on Adaptive Critic Design.

    PubMed

    Wang, Zhanshan; Liu, Lei; Wu, Yanming; Zhang, Huaguang

    2018-06-01

    This paper investigates the problem of optimal fault-tolerant control (FTC) for a class of unknown nonlinear discrete-time systems with actuator faults in the framework of adaptive critic design (ACD). A pivotal highlight is the adaptive auxiliary signal of the actuator fault, which is designed to offset the effect of the fault. The considered systems are in strict-feedback form and involve unknown nonlinear functions, which results in the causal problem. To solve this problem, the original nonlinear systems are transformed into a novel system by employing the diffeomorphism theory. Besides, action neural networks (ANNs) are utilized to approximate a predefined unknown function in the backstepping design procedure. Combining the strategic utility function and the ACD technique, a reinforcement learning algorithm is proposed to set up an optimal FTC, in which the critic neural networks (CNNs) provide an approximate structure of the cost function. In this case, it not only guarantees the stability of the systems but also achieves optimal control performance. In the end, two simulation examples are used to show the effectiveness of the proposed optimal FTC strategy.

  19. Optimal guidance for the space shuttle transition

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.

    1972-01-01

    A guidance method for the space shuttle's transition from hypersonic entry to subsonic cruising flight is presented. The method evolves from a numerical trajectory optimization technique in which kinetic energy and total energy (per unit weight) replace velocity and time in the dynamic equations. This allows the open end-time problem to be transformed to one of fixed terminal energy. In its ultimate form, E-Guidance obtains energy balance (including dynamic-pressure-rate damping) and path length control by angle-of-attack modulation and cross-range control by roll angle modulation. The guidance functions also form the basis for a pilot display of instantaneous maneuver limits and destination. Numerical results illustrate the E-Guidance concept and the optimal trajectories on which it is based.

  20. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    NASA Astrophysics Data System (ADS)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed-form solution of the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. (2007, in press), where the optimization problem was solved under only one linear constraint. This is of interest for solving significant problems pertaining to financial economics as well as some classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated in the problem of optimal portfolio selection, and the particular case in which the expected return of the portfolio is certain is discussed.

  1. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. Motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in the form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper involves novel objectives for robustness and performance assessments for such an approach. Objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. Regional pole placement method is presented with the aims of controllers' structures simplification and their additional arbitrary selection. Regional pole placement involves arbitrary selection of central polynomials for both loops, with additional admissible region of the optimized pole location. Polynomial deviation between selected and optimized polynomials is measured with derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of closed loops. The design of controllers and multiobjective optimization procedure involve a set of the objectives, which are optimized simultaneously with a genetic algorithm—differential evolution. PMID:24987749

  2. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-views video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and it is trapped easily into local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using Brown and HumanEva-II dataset demonstrated that H-MCPSO performance is better than two leading alternative approaches-Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims.

  3. Scenario based optimization of a container vessel with respect to its projected operating conditions

    NASA Astrophysics Data System (ADS)

    Wagner, Jonas; Binkowski, Eva; Bronsart, Robert

    2014-06-01

    In this paper the scenario-based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile generated from noon-to-noon reports of a comparable 3600 TEU container vessel and on specific development functions representing the growth of the global economy during the vessel's service time. In order to consider uncertainties, statistical fluctuations are added. An analysis of these data leads to a number of the most probable operating conditions (OC) the vessel will encounter in the future. According to their respective likelihoods, an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametric optimization workbench FRIENDSHIP Framework. In the following, this evaluation is done with respect to the vessel's calculated effective power, based on the use of a potential flow code. The evaluation shows that the use of scenarios within the optimization process has a strong influence on the hull form.

  4. Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Ji, Haibo

    2016-07-01

    The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information and meanwhile to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design can solve the exact optimization problem with rejecting disturbances.

  5. Application of a derivative-free global optimization algorithm to the derivation of a new time integration scheme for the simulation of incompressible turbulence

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.

    2016-11-01

    This work applies a recently developed Derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme should satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed leveraging the turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.

  6. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it has problems of slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of the elite solution pool and the block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated by a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms with minimum resistance. The tested results show that the proposed new improved ABC algorithm can outperform the ABC algorithm in most of the tested problems.
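
    For context, a sketch of the classical ABC solution search equation that the abstract identifies as the source of slow convergence; the improved elite-pool and block-perturbation variants proposed in the paper are not reproduced here, and the benchmark objective stands in for the hull-resistance evaluation.

        import numpy as np

        rng = np.random.default_rng(3)

        def abc_candidate(food_sources, i):
            # classical ABC search equation: v_ij = x_ij + phi * (x_ij - x_kj) for one random dimension j,
            # with phi uniform in [-1, 1] and k a randomly chosen different food source
            n, dim = food_sources.shape
            k = rng.choice([s for s in range(n) if s != i])
            j = rng.integers(dim)
            phi = rng.uniform(-1.0, 1.0)
            v = food_sources[i].copy()
            v[j] = v[j] + phi * (v[j] - food_sources[k, j])
            return v

        def sphere(x):                     # benchmark objective (stand-in for an expensive hull evaluation)
            return float(np.sum(x**2))

        foods = rng.uniform(-5.0, 5.0, size=(10, 4))
        for it in range(2000):             # employed-bee phase only, with greedy selection
            i = it % len(foods)
            v = abc_candidate(foods, i)
            if sphere(v) < sphere(foods[i]):
                foods[i] = v
        print("best value:", min(sphere(x) for x in foods))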

  7. Robust optimization of the billet for isothermal local loading transitional region of a Ti-alloy rib-web component based on dual-response surface method

    NASA Astrophysics Data System (ADS)

    Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao

    2018-03-01

    Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preforming billet, fluctuation of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and folding defects are the two key factors that influence the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding folding defects were defined as the objective function and constraint condition in the robust optimization. Then, a crossed array design was constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate by considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on robust optimization was conducted. Good results were attained for improving the die filling and avoiding folding defects, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.

  8. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
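
    A sketch of the kind of rank-one secant update described above, applied to the gradient estimate of a scalar limit state function so that repeated full finite differencing can be avoided; the test function is an assumption and this is not the BFORM code itself.

        import numpy as np

        def broyden_gradient_update(grad, dx, df):
            # rank-one (secant) update: g_new = g + ((df - g.dx) / (dx.dx)) * dx,
            # so that g_new satisfies the secant condition g_new . dx = df
            return grad + ((df - grad @ dx) / (dx @ dx)) * dx

        def g(x):                                     # toy limit state function (assumption)
            return x[0]**2 + 3.0 * x[1] - 4.0

        def fd_gradient(f, x, h=1e-6):                # the expensive finite-difference gradient being avoided
            return np.array([(f(x + h * e) - f(x)) / h for e in np.eye(len(x))])

        x_old = np.array([1.0, 2.0])
        grad = fd_gradient(g, x_old)                   # one full finite-difference evaluation to start
        x_new = np.array([1.2, 1.8])
        grad_upd = broyden_gradient_update(grad, x_new - x_old, g(x_new) - g(x_old))
        print("updated:", grad_upd, "finite difference at new point:", fd_gradient(g, x_new))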

  9. Design Optimization of Space Launch Vehicles Using a Genetic Algorithm

    DTIC Science & Technology

    2007-06-01

    ...function until no improvement in the objective function could be made. The search space is modeled in a geometric form such as a polyhedron. The simplex... database. AeroDesign assumes that there are no boundary layers and that no separation occurs. AeroDesign can analyze either a cone or ogive shape.

  10. Estimating Scale Economies and the Optimal Size of School Districts: A Flexible Form Approach

    ERIC Educational Resources Information Center

    Schiltz, Fritz; De Witte, Kristof

    2017-01-01

    This paper investigates estimation methods to model the relationship between school district size, costs per student and the organisation of school districts. We show that the assumptions on the functional form strongly affect the estimated scale economies and offer two possible solutions to allow for more flexibility in the estimation method.…

  11. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    ...automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related... universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can...

  12. Focusing elliptical laser beams

    NASA Astrophysics Data System (ADS)

    Marchant, A. B.

    1984-03-01

    The spot formed by focusing an elliptical laser beam through an ordinary objective lens can be optimized by properly filling the objective lens. Criteria are given for maximizing the central irradiance and the line-spread function. An optimized spot is much less elliptical than the incident laser beam. For beam ellipticities as high as 2:1, this spatial filtering reduces the central irradiance by less than 14 percent.

  13. Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability

    DTIC Science & Technology

    2011-11-10

    1. INTRODUCTION: Customers and product manufacturers demand continued functionality of complex equipment and processes. Degradation of material...

  14. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.

  15. Shape optimization of road tunnel cross-section by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej; Pachnicz, Michał

    2016-06-01

    The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the optimization procedure of simulated annealing (SA). The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers. The utilized algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca FLAC software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation. The resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.

  16. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  17. Optimal control of switching time in switched stochastic systems with multi-switching times and different costs

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian

    2017-08-01

    In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Fan, Q; Lei, Y

    Purpose: In-Water-Output-Ratio (IWOR) plays a significant role in linac-based radiotherapy treatment planning, linking MUs to delivered radiation dose. For an open rectangular field, IWOR depends on both its width and length, and changes rapidly when one of them becomes small. In this study, a universal functional form is proposed to fit the open field IWOR tables in the Varian TrueBeam representative datasets for all photon energies. Methods: A novel Generalized Mean formula is first used to estimate the Equivalent Square (ES) for a rectangular field. The formula's weighting factor and power index are determined by collapsing all data points as much as possible onto a single curve in the IWOR vs. ES plot. The result is then fitted with a novel universal function IWOR=1+b*Log(ES/10cm)/(ES/10cm)^c via a least-squares procedure to determine the optimal values for parameters b and c. The maximum relative residual error in IWOR over the entire two-dimensional measurement table with field sizes between 3 cm and 40 cm is used to evaluate the quality of fit for the function. Results: The two-step fitting strategy works very well in determining the optimal parameter values for the open field IWOR of each photon energy in the Varian dataset. A relative residual error ≤0.71% is achieved for all photon energies (including Flattening-Filter-Free modes) with field sizes between 3 cm and 40 cm. The optimal parameter values change smoothly with regular photon beam quality. Conclusion: The universal functional form fits the Varian TrueBeam open field IWOR measurement tables accurately, with small relative residual errors for all photon energies. Therefore, it can be an excellent choice to represent IWOR in absolute dose and MU calculations. The functional form can also be used as a QA/commissioning tool to verify measured data quality and consistency by checking the IWOR data behavior against the function for new photon energies with arbitrary beam quality.

  19. Application of dynamic programming to control khuzestan water resources system

    USGS Publications Warehouse

    Jamshidi, M.; Heidari, M.

    1977-01-01

    An approximate optimization technique based on discrete dynamic programming, called discrete differential dynamic programming (DDDP), is employed to obtain the near-optimal operation policies of a water resources system in the Khuzestan Province of Iran. The technique makes use of an initial nominal state trajectory for each state variable and forms corridors around the trajectories. These corridors represent a set of subdomains of the entire feasible domain. Starting with such a set of nominal state trajectories, improvements in the objective function are sought within the corridors formed around them. This leads to a set of new nominal trajectories upon which more improvements may be sought. Since optimization is confined to a set of subdomains, considerable savings in memory and computer time are achieved over conventional dynamic programming. The Khuzestan water resources system considered in this study is located in southwest Iran and consists of two rivers, three reservoirs, three hydropower plants, and three irrigable areas. Data and cost-benefit functions for the analysis were obtained either from historical records or from similar studies. © 1977.

  20. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.

  1. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
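
    A compact sketch of the idea, using a general nonlinear least-squares solver to determine all Prony constants, including the exponential time constants, rather than fixing the time constants in advance; the synthetic relaxation data below stand in for the propellant and fluoroelastomer test data, and this is not the PRONY program itself.

        import numpy as np
        from scipy.optimize import least_squares

        def prony(t, g_inf, g, tau):
            # Prony series: G(t) = G_inf + sum_i g_i * exp(-t / tau_i)
            return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

        def residuals(params, t, data, n_terms):
            g_inf = params[0]
            g = params[1:1 + n_terms]
            tau = np.exp(params[1 + n_terms:])        # optimize log(tau) so time constants stay positive
            return prony(t, g_inf, g, tau) - data

        # synthetic relaxation data (assumption), nonuniformly spaced in time
        t = np.logspace(-2, 3, 40)
        data = prony(t, 1.0, np.array([5.0, 2.0]), np.array([0.1, 50.0]))
        data = data * (1.0 + 0.01 * np.random.default_rng(4).normal(size=t.size))

        n_terms = 2
        x0 = np.concatenate([[0.5], np.ones(n_terms), np.log(np.array([1.0, 100.0]))])
        fit = least_squares(residuals, x0, args=(t, data, n_terms))
        print("G_inf, g_i, tau_i:", fit.x[0], fit.x[1:1 + n_terms], np.exp(fit.x[1 + n_terms:]))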

  2. Optimal Control Inventory Stochastic With Production Deteriorating

    NASA Astrophysics Data System (ADS)

    Affandi, Pardi

    2018-01-01

    In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build the stochastic mathematical inventory model; in this model we also assume that the items are held in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. In this research we discuss how to build the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for deriving the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. In this way we obtain the optimal production rate in a production-inventory system in which items are subject to deterioration.

  3. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  4. Optimal growth trajectories with finite carrying capacity.

    PubMed

    Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
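
    A sketch of the geometric-Brownian-motion special case mentioned above: with drift mu, volatility sigma, and an optional risk-free rate r, the long-run log-growth rate g(f) = r + f(mu - r) - f²σ²/2 is maximized at the Kelly fraction f* = (mu - r)/σ². The parameter values are illustrative, not from the paper.

        import numpy as np

        def log_growth_rate(f, mu, sigma, r=0.0):
            # expected long-run log growth when a constant fraction f is invested in a GBM asset
            return r + f * (mu - r) - 0.5 * (f * sigma)**2

        mu, sigma, r = 0.08, 0.2, 0.02                # illustrative drift, volatility, risk-free rate
        f_star = (mu - r) / sigma**2                   # Kelly fraction
        fs = np.linspace(0.0, 3.0, 301)
        g = log_growth_rate(fs, mu, sigma, r)
        print("Kelly fraction:", f_star, "grid argmax:", fs[np.argmax(g)],
              "growth at f*:", log_growth_rate(f_star, mu, sigma, r))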

  5. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks are faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance portfolio optimization for stocks using a non-constant mean and volatility, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedastic (GARCH) models. The optimization process is performed using the Lagrange multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.
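
    A sketch of the final Lagrange-multiplier step described above, on a plain mean-variance problem with a full-investment constraint; the ARMA forecast of the mean vector and the GARCH forecast of the covariance matrix are not reproduced, so the inputs below are placeholders.

        import numpy as np

        def mean_variance_weights(mu, Sigma, gamma):
            # maximize mu'w - (gamma/2) w'Sigma w subject to sum(w) = 1, via the Lagrange multiplier technique:
            # w* = (1/gamma) Sigma^{-1} (mu + nu * 1), with nu chosen so the weights sum to one
            ones = np.ones(len(mu))
            Sinv_mu = np.linalg.solve(Sigma, mu)
            Sinv_1 = np.linalg.solve(Sigma, ones)
            nu = (gamma - ones @ Sinv_mu) / (ones @ Sinv_1)
            return (Sinv_mu + nu * Sinv_1) / gamma

        # placeholder forecasts standing in for the ARMA means and GARCH covariances of the analysed stocks
        mu = np.array([0.012, 0.009, 0.015])
        Sigma = np.array([[0.040, 0.006, 0.004],
                          [0.006, 0.030, 0.005],
                          [0.004, 0.005, 0.050]])
        w = mean_variance_weights(mu, Sigma, gamma=3.0)
        print("weights:", w, "sum:", float(w.sum()))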

  6. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    Investors in stocks are faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Establishing a portfolio consisting of several stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance portfolio optimization for stocks using a non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedastic (GARCH) models. The optimization process is performed using the Lagrange multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is to obtain the proportion of investment in each stock analyzed.

  7. Optimal growth trajectories with finite carrying capacity

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.

  8. Tumor cell-derived microparticles: a new form of cancer vaccine.

    PubMed

    Zhang, Huafeng; Huang, Bo

    2015-08-01

    For cancer vaccines, tumor antigen availability is currently not an issue due to technical advances. However, the generation of optimal immune stimulation during vaccination is challenging. We have recently demonstrated that tumor cell-derived microparticles (MP) can function as a new form of potent cancer vaccine by efficiently activating type I interferon pathway in a cGAS/STING dependent manner.

  9. Exploring streamflow response to effective rainfall across event magnitude scale

    Treesearch

    Teemu Kokkonen; Harri Koivusalo; Tuomo Karvonen; Barry Croke; Anthony Jakeman

    2004-01-01

    Sets of flow events from four catchments were selected to study how dynamics in the conversion of effective rainfall into streamflow depends on the event size. The approach taken was to optimize parameters of a linear delay function and effective rainfall series concurrently from precipitation streamflow data without imposing a functional form of the precipitation...

  10. Changes in physical, chemical and functional properties of whey protein isolate (WPI) and sugar beet pectin (SBP) conjugates formed by controlled dry-heating

    USDA-ARS?s Scientific Manuscript database

    A Maillard type reaction in the dry state was utilized to create conjugates between whey protein isolate (WPI) and sugar beet pectin (SBP) to achieve improved functional properties including solubility, colloidal stability and oil-in-water emulsion stability. To optimize the reaction conditions, mi...

  11. Optimal Mortgage Refinancing: A Closed Form Solution.

    PubMed

    Agarwal, Sumit; Driscoll, John C; Laibson, David I

    2013-06-01

    We derive the first closed-form optimal refinancing rule: Refinance when the current mortgage interest rate falls below the original rate by at least [Formula: see text]. In this formula W(·) is the Lambert W-function, [Formula: see text], ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost to the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods.
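
    The bracketed formula placeholders above are not reproduced here. As a hedged illustration only, the snippet below evaluates a threshold of the Lambert-W form the authors describe; the expressions assumed for the intermediate quantities `psi` and `phi` follow a commonly cited statement of the rule and should be checked against the published paper, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import lambertw

# Assumed intermediate quantities (to be verified against the published formulas):
#   psi = sqrt(2*(rho + lam)) / sigma
#   phi = 1 + psi*(rho + lam)*kappa_over_M / (1 - tau)
rho   = 0.05    # real discount rate
lam   = 0.10    # expected real rate of exogenous mortgage repayment
sigma = 0.0109  # annualized std. dev. of the mortgage rate
kappa_over_M = 0.01   # tax-adjusted refinancing cost / remaining mortgage value
tau   = 0.28    # marginal tax rate

psi = np.sqrt(2.0 * (rho + lam)) / sigma
phi = 1.0 + psi * (rho + lam) * kappa_over_M / (1.0 - tau)

# Refinance when the rate has fallen by at least this many (decimal) points.
threshold = (phi + lambertw(-np.exp(-phi)).real) / psi
print(f"refinance when the rate falls by ~{threshold * 1e4:.0f} basis points")
```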

  12. Control design based on a linear state function observer

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1992-01-01

    An approach to the design of low-order controllers for large scale systems is proposed. The method is derived from the theory of linear state function observers. First, the realization of a state feedback control law is interpreted as the observation of a linear function of the state vector. The linear state function to be reconstructed is the given control law. Then, based on the derivation for linear state function observers, the observer design is formulated as a parameter optimization problem. The optimization objective is to generate a matrix that is close to the given feedback gain matrix. Based on that matrix, the form of the observer and a new control law can be determined. A four-disk system and a lightly damped beam are presented as examples to demonstrate the applicability and efficacy of the proposed method.

  13. Actor-critic-based optimal tracking for partially unknown nonlinear discrete-time systems.

    PubMed

    Kiumarsi, Bahare; Lewis, Frank L

    2015-01-01

    This paper presents a partially model-free adaptive optimal control solution to the deterministic nonlinear discrete-time (DT) tracking control problem in the presence of input constraints. The tracking error dynamics and reference trajectory dynamics are first combined to form an augmented system. Then, a new discounted performance function based on the augmented system is presented for the optimal nonlinear tracking problem. In contrast to the standard solution, which finds the feedforward and feedback terms of the control input separately, the minimization of the proposed discounted performance function gives both feedback and feedforward parts of the control input simultaneously. This enables us to encode the input constraints into the optimization problem using a nonquadratic performance function. The DT tracking Bellman equation and tracking Hamilton-Jacobi-Bellman (HJB) are derived. An actor-critic-based reinforcement learning algorithm is used to learn the solution to the tracking HJB equation online without requiring knowledge of the system drift dynamics. That is, two neural networks (NNs), namely, actor NN and critic NN, are tuned online and simultaneously to generate the optimal bounded control policy. A simulation example is given to show the effectiveness of the proposed method.

  14. A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.

    PubMed

    Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan

    2015-06-01

    Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion which can be used as objective function in optimal experiment design requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed form expression exists for the computation of eigenvalues of a matrix larger than a 4 by 4 one. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
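
    A minimal sketch of the reformulation on a toy problem (not the predictive-microbiology case study): the smallest eigenvalue of the Fisher information matrix is maximized indirectly by maximizing an auxiliary scalar t subject to F - t*I being positive semidefinite, with that matrix inequality imposed through the leading principal minors of Sylvester's criterion. All names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy E-optimal design: choose two observation points x1, x2 in [-1, 1] for the
# linear model y = theta0 + theta1*x so that the smallest eigenvalue of the
# Fisher information matrix F is maximized, without ever computing eigenvalues.

def fisher(x):
    X = np.column_stack([np.ones_like(x), x])   # design matrix
    return X.T @ X                              # FIM for unit-variance noise

def leading_minors(z):
    x, t = z[:2], z[2]
    M = fisher(x) - t * np.eye(2)
    return np.array([M[0, 0], np.linalg.det(M)])  # Sylvester's criterion

res = minimize(lambda z: -z[2],                    # maximize the auxiliary t
               x0=[-0.5, 0.4, 0.1],
               bounds=[(-1, 1), (-1, 1), (0, None)],
               constraints=[NonlinearConstraint(leading_minors, 0.0, np.inf)])

x_opt, t_opt = res.x[:2], res.x[2]
print("design points:", np.round(x_opt, 3), " t:", round(t_opt, 3))
print("smallest eigenvalue of F:", round(float(np.linalg.eigvalsh(fisher(x_opt)).min()), 3))
```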

  15. Educational Attainment and Adult Mortality in the United States: A Systematic Analysis of Functional Form*

    PubMed Central

    Montez, Jennifer Karas; Hummer, Robert A.; Hayward, Mark D.

    2012-01-01

    A vast literature has documented the inverse association between educational attainment and U.S. adult mortality risk, but given little attention to identifying the optimal functional form of the association. A theoretical explanation of the association hinges on our ability to empirically describe it. Using the 1979–1998 National Longitudinal Mortality Study for non-Hispanic white and black adults aged 25–100 years during the mortality follow-up period (N=1,008,215), we evaluated 13 functional forms across race-gender-age subgroups to determine which form(s) best captured the association. Results revealed that the generally preferred functional form includes a linear decline in mortality risk from 0–11 years of education, followed by a step-change reduction in mortality risk upon attainment of a high school diploma, after which mortality risk resumes a linear decline with a steeper slope than before the diploma. The findings provide important clues for theoretical development of explanatory mechanisms: an explanation for the selected functional form may require integrating a credentialist perspective to explain the step-change reduction in mortality risk upon attainment of a high school diploma, with a human capital perspective to explain the linear declines before and after a high school diploma. PMID:22246797
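
    The selected functional form can be written compactly as a spline with a discontinuity at the diploma. The sketch below is purely illustrative; the coefficients are placeholders, not estimates from the National Longitudinal Mortality Study.

```python
import numpy as np

def log_mortality_risk(edu_years, b_pre=-0.02, b_diploma=-0.15, b_post=-0.05, intercept=0.0):
    """Illustrative version of the preferred functional form: a linear decline
    through 0-11 years of education, a step-change reduction at the high school
    diploma (12 years), and a steeper linear decline thereafter. Coefficients
    here are placeholders, not estimates from the study."""
    edu = np.asarray(edu_years, dtype=float)
    pre = np.minimum(edu, 11)                   # years of schooling up to 11
    diploma = (edu >= 12).astype(float)         # credential indicator
    post = np.maximum(edu - 12, 0)              # years beyond the diploma
    return intercept + b_pre * pre + b_diploma * diploma + b_post * post

print(log_mortality_risk([8, 11, 12, 16]))
```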

  16. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators are solved by the sequential gradient restoration algorithm. Numerical examples for a two degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and obstacle avoidance scheme. The obstacle is deliberately placed at the midpoint of the previous no-obstacle optimal trajectory, or even further inward. For the minimum-time objective, the trajectory grazes the obstacle and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance index, as well as to mobile robots. Since this method generates the optimum solution based on the Pontryagin Extremum Principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
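
    A hedged sketch of how an ellipsoidal obstacle can enter the performance index as a penalty built from a distance-like function; the weight and decay rate, and the trajectory points, are made-up values rather than anything from the paper.

```python
import numpy as np

def ellipsoid_level(q, center, semi_axes):
    """Distance-like function for an ellipsoidal obstacle: <1 inside, 1 on the
    boundary, >1 outside."""
    return np.sum(((q - center) / semi_axes) ** 2)

def obstacle_penalty(q, center, semi_axes, weight=100.0, sharpness=5.0):
    """Artificial-potential-field penalty added to the performance index; it is
    large inside/near the obstacle and decays smoothly away from it."""
    d = ellipsoid_level(q, center, semi_axes)
    return weight * np.exp(-sharpness * (d - 1.0))

# End-effector positions along a candidate trajectory (illustrative numbers).
trajectory = np.array([[0.0, 0.0], [0.4, 0.3], [0.8, 0.55], [1.2, 0.7]])
center, semi_axes = np.array([0.8, 0.5]), np.array([0.2, 0.1])

augmented_cost = sum(obstacle_penalty(q, center, semi_axes) for q in trajectory)
print("total obstacle penalty along trajectory:", round(augmented_cost, 3))
```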

  17. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem. For a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
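
    A one-dimensional toy version of the resulting problem: with wiring cost scaling as wire length squared, the layout cost is a quadratic form in the neuron positions, and under zero-mean/unit-norm constraints its exact minimizer is an eigenvector of the connectivity graph Laplacian. This is a sketch of that textbook special case with an arbitrary small connectivity matrix, not the biologically motivated constraints used in the paper.

```python
import numpy as np

# Minimize the wiring cost sum over connected pairs of A_ij * (x_i - x_j)^2
# subject to sum(x) = 0 (fix translation) and ||x|| = 1 (fix scale). The exact
# minimizer is the Laplacian eigenvector with the smallest nonzero eigenvalue.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)   # toy symmetric connectivity

L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)           # ascending eigenvalues

x_opt = eigvecs[:, 1]                          # eigenvector of the smallest nonzero eigenvalue
print("optimal 1-D layout:", np.round(x_opt, 3))
print("wiring cost x'Lx =", round(float(x_opt @ L @ x_opt), 4), "= eigenvalue", round(float(eigvals[1]), 4))
```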

  18. Maximally Informative Stimuli and Tuning Curves for Sigmoidal Rate-Coding Neurons and Populations

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Stocks, Nigel G.

    2008-08-01

    A general method for deriving maximally informative sigmoidal tuning curves for neural systems with small normalized variability is presented. The optimal tuning curve is a nonlinear function of the cumulative distribution function of the stimulus and depends on the mean-variance relationship of the neural system. The derivation is based on a known relationship between Shannon’s mutual information and Fisher information, and the optimality of Jeffrey’s prior. It relies on the existence of closed-form solutions to the converse problem of optimizing the stimulus distribution for a given tuning curve. It is shown that maximum mutual information corresponds to constant Fisher information only if the stimulus is uniformly distributed. As an example, the case of sub-Poisson binomial firing statistics is analyzed in detail.
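
    A minimal sketch of one special case, assuming additive Gaussian noise with constant variance: there, maximizing mutual information makes the square root of the Fisher information proportional to the stimulus density, so the optimal sigmoidal tuning curve is a scaled copy of the stimulus cumulative distribution function. The stimulus distribution and rate range below are assumptions for illustration; other mean-variance relationships (e.g., the sub-Poisson binomial case) change the nonlinearity.

```python
import numpy as np
from scipy.stats import norm

# Special case: additive Gaussian noise with constant variance, for which the
# maximally informative tuning curve is a scaled stimulus CDF.
mu_s, sigma_s = 0.0, 1.0          # assumed Gaussian stimulus distribution
f_min, f_max = 1.0, 50.0          # assumed firing-rate range (spikes/s)

def optimal_tuning_curve(s):
    return f_min + (f_max - f_min) * norm.cdf(s, loc=mu_s, scale=sigma_s)

stimuli = np.linspace(-3, 3, 7)
print(np.round(optimal_tuning_curve(stimuli), 1))
```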

  19. Passaged adult chondrocytes can form engineered cartilage with functional mechanical properties: a canine model.

    PubMed

    Ng, Kenneth W; Lima, Eric G; Bian, Liming; O'Conor, Christopher J; Jayabalan, Prakash S; Stoker, Aaron M; Kuroki, Keiichi; Cook, Cristi R; Ateshian, Gerard A; Cook, James L; Hung, Clark T

    2010-03-01

    It was hypothesized that previously optimized serum-free culture conditions for juvenile bovine chondrocytes could be adapted to generate engineered cartilage with physiologic mechanical properties in a preclinical, adult canine model. Primary or passaged (using growth factors) adult chondrocytes from three adult dogs were encapsulated in agarose, and cultured in serum-free media with transforming growth factor-beta3. After 28 days in culture, engineered cartilage formed by primary chondrocytes exhibited only small increases in glycosaminoglycan content. However, all passaged chondrocytes on day 28 elaborated a cartilage matrix with compressive properties and glycosaminoglycan content in the range of native adult canine cartilage values. A preliminary biocompatibility study utilizing chondral and osteochondral constructs showed no gross or histological signs of rejection, with all implanted constructs showing excellent integration with surrounding cartilage and subchondral bone. This study demonstrates that adult canine chondrocytes can form a mechanically functional, biocompatible engineered cartilage tissue under optimized culture conditions. The encouraging findings of this work highlight the potential for tissue engineering strategies using adult chondrocytes in the clinical treatment of cartilage defects.

  20. A weak Hamiltonian finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Calise, Anthony J.; Bless, Robert R.; Leung, Martin

    1989-01-01

    A temporal finite-element method based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables, which are expanded in terms of nodal values and simple shape functions. Time derivatives of the states and costates do not appear in the governing variational equation; the only quantities whose time derivatives appear therein are virtual states and virtual costates. Numerical results are presented for an elementary trajectory optimization problem; they show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The feasibility of this approach for real-time guidance applications is evaluated. A simplified model for an advanced launch vehicle application that is suitable for finite-element solution is presented.

  1. Optimistic expectations in early marriage: a resource or vulnerability for adaptive relationship functioning?

    PubMed

    Neff, Lisa A; Geers, Andrew L

    2013-07-01

    Do optimistic expectations facilitate or hinder adaptive responses to relationship challenges? Traditionally, optimism has been characterized as a resource that encourages positive coping efforts within relationships. Yet, some work suggests optimism can be a liability, as expecting the best may prevent individuals from taking proactive steps when confronted with difficulties. To reconcile these perspectives, the current article argues that greater attention must be given to the way in which optimistic expectancies are conceptualized. Whereas generalized dispositional optimism may predict constructive responses to relationship difficulties, more focused relationship-specific forms of optimism may predict poor coping responses. A multi-method, longitudinal study of newly married couples confirmed that spouses higher in dispositional optimism (a) reported engaging in more positive problem-solving behaviors on days in which they experienced greater relationship conflict, (b) were observed to display more constructive problem-solving behaviors when discussing important marital issues with their partner in the lab, and (c) experienced fewer declines in marital well-being over the 1st year of marriage. Conversely, spouses higher in relationship-specific optimism (a) reported engaging in fewer constructive problem-solving behaviors on high conflict days, (b) were observed to exhibit worse problem-solving behaviors in the lab-particularly when discussing marital issues of greater importance-and (c) experienced steeper declines in marital well-being over time. All findings held controlling for self-esteem and neuroticism. Together, results suggest that whereas global forms of optimism may represent a relationship asset, specific forms of optimism can place couples at risk for marital deterioration. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Research on bulbous bow optimization based on the improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang

    2017-08-01

    In order to reduce the total resistance of a hull, an optimization framework for the bulbous bow optimization was presented. The total resistance in calm water was selected as the objective function, and the overset mesh technique was used for mesh generation. RANS method was used to calculate the total resistance of the hull. In order to improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique was introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm was proposed to set up the optimization model. After a series of optimization analyses, the optimal hull form was found. It can be concluded that the simulation based design framework built in this paper is a promising method for bulbous bow optimization.
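
    For reference, a generic particle swarm optimization loop is sketched below on a cheap analytic stand-in for the CFD-evaluated resistance; the specific improvements of the IPSO variant are not reproduced, and all coefficients are conventional illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                       # stand-in for the resistance evaluation
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.cos(5 * x))

dim, n_particles, n_iter = 4, 20, 100
lb, ub = -1.0, 1.0
w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

x = rng.uniform(lb, ub, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lb, ub)
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best design variables:", np.round(gbest, 3), "objective:", round(objective(gbest), 4))
```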

  3. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using a real terrain data of Mars, with four million discrete states at each time step.

  4. Optimizing structure of complex technical system by heterogeneous vector criterion in interval form

    NASA Astrophysics Data System (ADS)

    Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.

    2018-05-01

    The article examines methods for the development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, in the absence of sufficient knowledge of the parameters and variables for optimizing this structure. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the combined use of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form from the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values. These membership functions show the degree of proximity of the realization of the designed system to the efficient or Pareto optimal variants. The study analyzes the example of choosing the optimal variant for a complex system using heterogeneous quality criteria.

  5. Comments on "The multisynapse neural network and its application to fuzzy clustering".

    PubMed

    Yu, Jian; Hao, Pengwei

    2005-05-01

    In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective-function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.

  6. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.

  7. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in the form of dynamic/multi-stage optimization to solve an integrated supplier selection problem and tracking control problem of single product inventory system with product discount. The product discount will be stated as a piece-wise linear function. We use dynamic programming to solve this proposed optimization to determine the optimal supplier and the optimal product volume that will be purchased from the optimal supplier for each time period so that the inventory level tracks a reference trajectory given by decision maker with minimal total cost. We give a numerical experiment to evaluate the proposed model. From the result, the optimal supplier was determined for each time period and the inventory level follows the given reference well.

  8. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Additional modifications of MVP have been made to solve the problem of N-version system design; these algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of these algorithm modifications, because they reduce the search space.

  9. Optimal exploitation strategies for an animal population in a Markovian environment: A theory and an example

    USGS Publications Warehouse

    Anderson, D.R.

    1975-01-01

    Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under the rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slight concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.

  10. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2017-08-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2 )-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  11. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2018-04-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  12. Engineering of the function of diamond-like carbon binding peptides through structural design.

    PubMed

    Gabryelczyk, Bartosz; Szilvay, Géza R; Singh, Vivek K; Mikkilä, Joona; Kostiainen, Mauri A; Koskinen, Jari; Linder, Markus B

    2015-02-09

    The use of phage display to select material-specific peptides provides a general route towards modification and functionalization of surfaces and interfaces. However, a rational structural engineering of the peptides for optimal affinity is typically not feasible because of insufficient structure-function understanding. Here, we investigate the influence of multivalency of diamond-like carbon (DLC) binding peptides on binding characteristics. We show that facile linking of peptides together using different lengths of spacers and multivalency leads to a tuning of affinity and kinetics. Notably, increased length of spacers in divalent systems led to significantly increased affinities. Making multimers influenced also kinetic aspects of surface competition. Additionally, the multivalent peptides were applied as surface functionalization components for a colloidal form of DLC. The work suggests the use of a set of linking systems to screen parameters for functional optimization of selected material-specific peptides.

  13. Eigenvectors of optimal color spectra.

    PubMed

    Flinkman, Mika; Laamanen, Hannu; Tuomela, Jukka; Vahimaa, Pasi; Hauta-Kasari, Markku

    2013-09-01

    Principal component analysis (PCA) and weighted PCA were applied to spectra of optimal colors belonging to the outer surface of the object-color solid or to so-called MacAdam limits. The correlation matrix formed from these data is a circulant matrix whose largest eigenvalue is simple and the corresponding eigenvector is constant. All other eigenvalues are double, and the eigenvectors can be expressed with trigonometric functions. These trigonometric functions can be used as a general basis to reconstruct all possible smooth reflectance spectra. When the spectral data are weighted with an appropriate weight function, the essential part of the color information is compressed to the first three components and the shapes of the first three eigenvectors correspond to one achromatic response function and to two chromatic response functions, the latter corresponding approximately to Munsell opponent-hue directions 9YR-9B and 2BG-2R.
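
    The stated structure is easy to verify numerically: a circulant correlation matrix has a constant eigenvector for its single largest eigenvalue, and the remaining eigenvalues come in pairs whose eigenvectors span cosine/sine pairs. The sketch below uses an arbitrary circular correlation function, not the measured spectra of the paper.

```python
import numpy as np
from scipy.linalg import circulant

n = 12
first_col = 0.9 ** np.minimum(np.arange(n), n - np.arange(n))  # circular correlation
C = circulant(first_col)                                       # symmetric circulant matrix

eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                              # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("top eigenvalue is simple, next two are equal:", np.round(eigvals[:4], 4))
print("top eigenvector is constant:", np.round(eigvecs[:, 0], 3))

# The second and third eigenvectors span the same subspace as cos and sin of one period.
k = np.arange(n)
basis = np.column_stack([np.cos(2 * np.pi * k / n), np.sin(2 * np.pi * k / n)])
proj = basis @ np.linalg.lstsq(basis, eigvecs[:, 1:3], rcond=None)[0]
print("residual of trigonometric fit:", np.linalg.norm(proj - eigvecs[:, 1:3]))
```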

  14. Toward an Integration of Deep Learning and Neuroscience

    PubMed Central

    Marblestone, Adam H.; Wayne, Greg; Kording, Konrad P.

    2016-01-01

    Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. PMID:27683554

  15. Transmission loss optimization in acoustic sandwich panels

    NASA Astrophysics Data System (ADS)

    Makris, S. E.; Dym, C. L.; MacGregor Smith, J.

    1986-06-01

    Considering the sound transmission loss (TL) of a sandwich panel as the single objective, different optimization techniques are examined and a sophisticated computer program is used to find the optimum TL. Also, for one of the possible case studies such as core optimization, closed-form expressions are given between TL and the core-design variables for different sets of skins. The significance of these functional relationships lies in the fact that the panel designer can bypass the necessity of using a sophisticated software package in order to assess explicitly the dependence of the TL on core thickness and density.

  16. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    PubMed

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.

  17. Performance of an Optimally Tuned Range-Separated Hybrid Functional for 0-0 Electronic Excitation Energies.

    PubMed

    Jacquemin, Denis; Moore, Barry; Planchat, Aurélien; Adamo, Carlo; Autschbach, Jochen

    2014-04-08

    Using a set of 40 conjugated molecules, we assess the performance of an "optimally tuned" range-separated hybrid functional in reproducing the experimental 0-0 energies. The selected protocol accounts for the impact of solvation using a corrected linear-response continuum approach and vibrational corrections through calculations of the zero-point energies of both ground and excited-states and provides basis set converged data thanks to the systematic use of diffuse-containing atomic basis sets at all computational steps. It turns out that an optimally tuned long-range corrected hybrid form of the Perdew-Burke-Ernzerhof functional, LC-PBE*, delivers both the smallest mean absolute error (0.20 eV) and standard deviation (0.15 eV) of all tested approaches, while the obtained correlation (0.93) is large but remains slightly smaller than its M06-2X counterpart (0.95). In addition, the efficiency of two other recently developed exchange-correlation functionals, namely SOGGA11-X and ωB97X-D, has been determined in order to allow more complete comparisons with previously published data.

  18. A new method for designing dual foil electron beam forming systems. II. Feasibility of practical implementation of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while being computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. For this, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be simultaneously optimized with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to the application of the new method, as well as its possible extensions, are discussed.

  19. Robust Design of Sheet Metal Forming Process Based on Kriging Metamodel

    NASA Astrophysics Data System (ADS)

    Xie, Yanmin

    2011-08-01

    Nowadays, sheet metal forming process design is not a trivial task due to the complex issues to be taken into account (conflicting design goals, forming of complex shapes, and so on). Optimization methods have therefore been widely applied in sheet metal forming, and proper design methods to reduce time and costs have to be developed, mostly based on computer-aided procedures. At the same time, the existence of variations during manufacturing processes may significantly influence final product quality, rendering optimal solutions non-robust. In this paper, a small design of experiments is conducted to investigate how the stochastic behavior of noise factors affects drawing quality. The finite element software (LS_DYNA) is used to simulate the complex sheet metal stamping processes. The Kriging metamodel is adopted to map the relation between input process parameters and part quality. Robust design models for the sheet metal forming process integrate adaptive importance sampling with the Kriging model, in order to minimize the impact of the variations and achieve reliable process parameters. In the adaptive sampling, an improved criterion is used to indicate where additional training samples can be added to improve the Kriging model. Nonlinear test functions and a square stamping example (NUMISHEET'93) are employed to verify the proposed method. Final results indicate the feasibility of applying the proposed method to multi-response robust design.

  20. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  1. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that involves these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiment (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are involved in form of their density functions. In order to reduce the computational effort we use response surfaces instead of the combined system model obtained in all stochastic analysis steps. Thus, Monte-Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.

  2. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
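
    For context, the unconstrained starting point of such a design is the standard LQR gain obtained from the Algebraic Riccati Equation. The sketch below computes it for a single-degree-of-freedom mass-spring-damper with illustrative weights; it does not implement the paper's restriction to passively realizable gains or its weighting-matrix selection.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Unit mass-spring-damper in state-space form; numbers are illustrative.
m, k, c = 1.0, 4.0, 0.2
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([10.0, 1.0])      # state weighting (displacement, velocity)
R = np.array([[0.1]])         # control-effort weighting

P = solve_continuous_are(A, B, Q, R)     # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # optimal feedback gain u = -K x
print("LQR gain K:", np.round(K, 3))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```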

  3. Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

    Fuzzy optimization has been one of the most prominent topics within the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large scale non-linear fuzzy programming problem has been solved by the hybrid optimization techniques of Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and level of satisfaction have been provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is very efficient and productive in solving large scale non-linear fuzzy programming problems.

  4. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  5. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory known as the "trust region subproblem" or "constraint least square problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number {N}_{tot} of critical (stationary) points in the cost function landscape. In the first regime {N}_{tot} remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
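
    As a hedged numerical companion (not the replica calculation), the sketch below solves one realization of the underlying trust-region subproblem: the global minimum of a random quadratic-plus-linear cost on the sphere is obtained from the stationarity condition (A - nu*I) x = -b with nu below the smallest eigenvalue of A, where nu is found by root-finding on the constraint ||x(nu)|| = R. The GOE-like scaling and field strength are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
N, R = 50, np.sqrt(50)                   # N variables, sphere of radius sqrt(N)

G = rng.standard_normal((N, N))
A = (G + G.T) / np.sqrt(2 * N)           # GOE-like random quadratic form
b = 0.5 * rng.standard_normal(N)         # random "magnetic field"

lam, Qm = np.linalg.eigh(A)              # ascending eigenvalues
beta = Qm.T @ b

def radius(nu):                          # ||x(nu)|| for nu < lambda_min(A)
    return np.sqrt(np.sum(beta**2 / (lam - nu)**2))

lo = lam[0] - 1.0
while radius(lo) > R:                    # bracket the root from below
    lo -= 1.0
nu = brentq(lambda v: radius(v) - R, lo, lam[0] - 1e-10)

x_star = Qm @ (beta / (nu - lam))        # x(nu) = (nu*I - A)^{-1} b
E_min = 0.5 * x_star @ A @ x_star + b @ x_star
print("||x*|| =", round(float(np.linalg.norm(x_star)), 4), " ground-state energy =", round(float(E_min), 4))
```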

  6. Pediatric Burn Reconstruction: Focus on Evidence.

    PubMed

    Fisher, Mark

    2017-10-01

    In this article, the author surveys the best available evidence to guide decision-making in pediatric burn reconstruction. Evidence-based protocols are examined in the context of optimizing form and function in children who have sustained burn injury. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  8. Neural-Network-Based Robust Optimal Tracking Control for MIMO Discrete-Time Systems With Unknown Uncertainty Using Adaptive Critic Design.

    PubMed

    Liu, Lei; Wang, Zhanshan; Zhang, Huaguang

    2018-04-01

    This paper is concerned with the robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method, so that the cost function in the procedure of dealing with uncertainty is minimum and the closed-loop system is stable. Based on the neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to the previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty and 2) a novel cost function, which is not in quadratic form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.

  9. Optimal control problems with mixed control-phase variable equality and inequality constraints

    NASA Technical Reports Server (NTRS)

    Makowski, K.; Neustad, L. W.

    1974-01-01

    In this paper, necessary conditions are obtained for optimal control problems containing equality constraints defined in terms of functions of the control and phase variables. The control system is assumed to be characterized by an ordinary differential equation, and more conventional constraints, including phase inequality constraints, are also assumed to be present. Because the first-mentioned equality constraint must be satisfied for all t (the independent variable of the differential equation) belonging to an arbitrary (prescribed) measurable set, this problem gives rise to infinite-dimensional equality constraints. To obtain the necessary conditions, which are in the form of a maximum principle, an implicit-function-type theorem in Banach spaces is derived.

  10. Sequence-controlled RNA self-processing: computational design, biochemical analysis, and visualization by AFM

    PubMed Central

    Petkovic, Sonja; Badelt, Stefan; Flamm, Christoph; Delcea, Mihaela

    2015-01-01

    Reversible chemistry allowing for assembly and disassembly of molecular entities is important for biological self-organization. Thus, ribozymes that support both cleavage and formation of phosphodiester bonds may have contributed to the emergence of functional diversity and increasing complexity of regulatory RNAs in early life. We have previously engineered a variant of the hairpin ribozyme that shows how ribozymes may have circularized or extended their own length by forming concatemers. Using the Vienna RNA package, we now optimized this hairpin ribozyme variant and selected four different RNA sequences that were expected to circularize more efficiently or form longer concatemers upon transcription. (Two-dimensional) PAGE analysis confirms that (i) all four selected ribozymes are catalytically active and (ii) high yields of cyclic species are obtained. AFM imaging in combination with RNA structure prediction enabled us to calculate the distributions of monomers and self-concatenated dimers and trimers. Our results show that computationally optimized molecules do form reasonable amounts of trimers, which has not been observed for the original system so far, and we demonstrate that the combination of theoretical prediction, biochemical and physical analysis is a promising approach toward accurate prediction of ribozyme behavior and design of ribozymes with predefined functions. PMID:25999318

  11. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B.G.

    1999-11-11

    The Krieger-Li-Iafrate approximation can be expressed as the zeroth order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.

  12. Smart Phase Tuning in Microwave Photonic Integrated Circuits Toward Automated Frequency Multiplication by Design

    NASA Astrophysics Data System (ADS)

    Nabavi, N.

    2018-07-01

    The author investigates the monitoring methods for fine adjustment of the previously proposed on-chip architecture for frequency multiplication and translation of harmonics by design. Digital signal processing (DSP) algorithms are utilized to create an optimized microwave photonic integrated circuit functionality toward automated frequency multiplication. The implemented DSP algorithms are based on the discrete Fourier transform and on optimization algorithms (greedy and gradient-based), which are analytically derived and numerically compared with respect to accuracy and speed of convergence.

  13. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput and the network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper Transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
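
    The parametric (Dinkelbach-type) iteration and the water-filling structure described above can be sketched as follows. This is a toy single-cell version with made-up channel gains and power limits, not the paper's cognitive-radio model with interference constraints; for a fixed parameter lam, the separable inner problem is solved in closed water-filling form.

      # Dinkelbach-style parametric iteration for an energy-efficiency ratio
      #   maximize  f(p)/g(p) = sum(log(1 + g_i p_i)) / (P_c + sum(p_i)).
      # For fixed lam, max f(p) - lam*g(p) separates per node and has the
      # water-filling form p_i = clip(1/lam - 1/g_i, 0, p_max).
      import numpy as np

      gains = np.array([2.0, 1.0, 0.5, 0.25])   # hypothetical channel gains
      P_c, p_max = 1.0, 2.0                     # circuit power, per-node power cap

      def f(p):            # network throughput (nats)
          return np.sum(np.log1p(gains * p))

      def g(p):            # total consumed power
          return P_c + np.sum(p)

      lam = 0.0
      for it in range(50):
          p = np.clip(1.0 / lam - 1.0 / gains, 0.0, p_max) if lam > 0 else np.full_like(gains, p_max)
          F = f(p) - lam * g(p)                 # Dinkelbach residual, tends to 0 at the optimum
          if abs(F) < 1e-10:
              break
          lam = f(p) / g(p)                     # parameter update

      print(f"epsilon-optimal energy efficiency: {lam:.4f} nats per unit power")
      print("power allocation:", np.round(p, 4))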

  14. Direct aperture optimization using an inverse form of back-projection.

    PubMed

    Zhu, Xiaofeng; Cullip, Timothy; Tracton, Gregg; Tang, Xiaoli; Lian, Jun; Dooley, John; Chang, Sha X

    2014-03-06

    Direct aperture optimization (DAO) has been used to produce high dosimetric quality intensity-modulated radiotherapy (IMRT) treatment plans with fast treatment delivery by directly modeling the multileaf collimator segment shapes and weights. To improve plan quality and reduce treatment time for our in-house treatment planning system, we implemented a new DAO approach without using a global objective function (GOF). An index concept is introduced as an inverse form of back-projection used in the CT multiplicative algebraic reconstruction technique (MART). The index, introduced for IMRT optimization in this work, is analogous to the multiplicand in MART. The index is defined as the ratio of the optimal value to the current value. It is assigned to each voxel and beamlet to optimize the fluence map. The indices for beamlets and segments are used to optimize multileaf collimator (MLC) segment shapes and segment weights, respectively. Preliminary data show that without sacrificing dosimetric quality, the implementation of the DAO reduced average IMRT treatment time from 13 min to 8 min for the prostate, and from 15 min to 9 min for the head and neck using our in-house treatment planning system PlanUNC. The DAO approach has also shown promise in optimizing rotational IMRT with burst mode in a head and neck test case.
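
    The multiplicative "index" update can be illustrated with a generic fluence-map toy: each voxel index is the ratio of prescribed to current dose, and each beamlet weight is rescaled by a dose-matrix-weighted average of the indices it contributes to. This heuristic sketch only mirrors the spirit of the MART-like update (it reduces to a Richardson-Lucy-style multiplicative iteration); it is not the published DAO algorithm, and the dose matrix and prescription below are arbitrary.

      # Generic multiplicative (MART-like) fluence update built around the index idea.
      import numpy as np

      rng = np.random.default_rng(1)
      D = rng.uniform(0.0, 1.0, size=(40, 8))     # dose matrix: 40 voxels x 8 beamlets
      prescribed = np.full(40, 1.0)               # uniform prescribed dose
      w = np.ones(8)                              # beamlet weights (fluence)

      for it in range(200):
          current = D @ w
          index = prescribed / np.maximum(current, 1e-9)               # per-voxel index
          beamlet_index = (D.T @ index) / np.maximum(D.sum(axis=0), 1e-9)
          w *= beamlet_index                                           # multiplicative update

      print("max dose deviation from prescription:",
            round(float(np.max(np.abs(D @ w - prescribed))), 3))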

  15. Computer Optimization of Biodegradable Nanoparticles Fabricated by Dispersion Polymerization.

    PubMed

    Akala, Emmanuel O; Adesina, Simeon; Ogunwuyi, Oluwaseun

    2015-12-22

    Quality by design (QbD) in the pharmaceutical industry involves designing and developing drug formulations and manufacturing processes which ensure predefined drug product specifications. QbD helps to understand how process and formulation variables affect product characteristics and supports subsequent optimization of these variables vis-à-vis final specifications. Statistical design of experiments (DoE) identifies important parameters in a pharmaceutical dosage form design, followed by optimization of the parameters with respect to certain specifications. DoE establishes in mathematical form the relationships between critical process parameters, critical material attributes, and critical quality attributes. We focused on the fabrication of biodegradable nanoparticles by dispersion polymerization. Aided by statistical software, a D-optimal mixture design was used to vary the components (crosslinker, initiator, stabilizer, and macromonomers) to obtain twenty nanoparticle formulations (PLLA-based nanoparticles) and thirty formulations (poly-ɛ-caprolactone-based nanoparticles). Scheffé polynomial models were generated to predict particle size (nm), zeta potential, and yield (%) as functions of the composition of the formulations. Simultaneous optimizations were carried out on the response variables. Solutions were returned from simultaneous optimization of the response variables for component combinations to (1) minimize nanoparticle size; (2) maximize the surface negative zeta potential; and (3) maximize percent yield to make the nanoparticle fabrication an economic proposition.

  16. Phonon optimized interatomic potential for aluminum

    NASA Astrophysics Data System (ADS)

    Muraleedharan, Murali Gopal; Rohskopf, Andrew; Yang, Vigor; Henry, Asegun

    2017-12-01

    We address the problem of generating a phonon optimized interatomic potential (POP) for aluminum. The POP methodology, which has already been shown to work for semiconductors such as silicon and germanium, uses an evolutionary strategy based on a genetic algorithm (GA) to optimize the free parameters in an empirical interatomic potential (EIP). For aluminum, we used the Vashishta functional form. The training data set was generated ab initio, consisting of forces, energy vs. volume, stresses, and harmonic and cubic force constants obtained from density functional theory (DFT) calculations. Existing potentials for aluminum, such as the embedded atom method (EAM) and charge-optimized many-body (COMB3) potential, show larger errors when the EIP forces are compared with those predicted by DFT, and thus they are not particularly well suited for reproducing phonon properties. Using a comprehensive Vashishta functional form, which involves short and long-ranged interactions, as well as three-body terms, we were able to better capture interactions that reproduce phonon properties accurately. Furthermore, the Vashishta potential is flexible enough to be extended to Al2O3 and the interface between Al-Al2O3, which is technologically important for combustion of solid Al nano powders. The POP developed here is tested for accuracy by comparing phonon thermal conductivity accumulation plots, density of states, and dispersion relations with DFT results. It is shown to perform well in molecular dynamics (MD) simulations as well, where the phonon thermal conductivity is calculated via the Green-Kubo relation. The results are within 10% of the values obtained by solving the Boltzmann transport equation (BTE), employing Fermi's Golden Rule to predict the phonon-phonon relaxation times.

  17. Mineral inversion for element capture spectroscopy logging based on optimization theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning

    2017-12-01

    Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. This method used the conversion relationships between elements and minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results showed good agreement with X-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.
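
    A toy version of such an element-to-mineral inversion can be written as a weighted, bounded least-squares problem: the elemental dry weights are modeled as a linear mix of mineral compositions, and the fitted mineral fractions are used to reconstruct the elemental logs as a quality check. The response matrix and measurements below are illustrative assumptions, not the ECS tool's actual response equations.

      # Toy element-to-mineral inversion: e = A @ m, where column j of A holds the
      # elemental composition of mineral j.  Mineral fractions m are recovered by
      # uncertainty-weighted, bounded least squares.
      import numpy as np
      from scipy.optimize import lsq_linear

      # rows: Si, Ca, Fe, Al ; columns: quartz, calcite, illite (illustrative numbers)
      A = np.array([[0.467, 0.000, 0.245],
                    [0.000, 0.400, 0.005],
                    [0.000, 0.000, 0.070],
                    [0.000, 0.000, 0.120]])
      e = np.array([0.30, 0.12, 0.02, 0.04])          # measured elemental dry weights
      sigma = np.array([0.01, 0.01, 0.005, 0.005])    # measurement uncertainties

      # weight each response equation by 1/sigma and bound fractions to [0, 1]
      res = lsq_linear(A / sigma[:, None], e / sigma, bounds=(0.0, 1.0))
      m = res.x
      print("mineral dry-weight fractions:", np.round(m, 3))
      print("reconstructed elemental log :", np.round(A @ m, 3))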

  18. Optimal Portfolio Selection Under Concave Price Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  19. Active vibration mitigation of distributed parameter, smart-type structures using Pseudo-Feedback Optimal Control (PFOC)

    NASA Technical Reports Server (NTRS)

    Patten, W. N.; Robertshaw, H. H.; Pierpont, D.; Wynn, R. H.

    1989-01-01

    A new, near-optimal feedback control technique is introduced that is shown to provide excellent vibration attenuation for those distributed parameter systems that are often encountered in the areas of aeroservoelasticity and large space systems. The technique relies on a novel solution methodology for the classical optimal control problem. Specifically, the quadratic regulator control problem for a flexible vibrating structure is first cast in a weak functional form that admits an approximate solution. The necessary conditions (first-order) are then solved via a time finite-element method. The procedure produces a low dimensional, algebraic parameterization of the optimal control problem that provides a rigorous basis for a discrete controller with a first-order like hold output. Simulation has shown that the algorithm can successfully control a wide variety of plant forms including multi-input/multi-output systems and systems exhibiting significant nonlinearities. In order to firmly establish the efficacy of the algorithm, a laboratory control experiment was implemented to provide planar (bending) vibration attenuation of a highly flexible beam (with a first clamped-free mode of approximately 0.5 Hz).

  20. Efficient Transition State Optimization of Periodic Structures through Automated Relaxed Potential Energy Surface Scans.

    PubMed

    Plessow, Philipp N

    2018-02-13

    This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions almost exclusively rely on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.

  1. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.

  2. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm or an evolutionary approach is preferable, at least from the quality (accuracy) of the results point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, and there are no serious differences between them in this regard. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also for a function which is known in tabular form, instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of several available methods, should be employed to obtain the best (least error) output.
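
    The trade-off discussed above can be reproduced on a standard bound-constrained multimodal test function, with a quasi-Newton gradient method and an evolutionary method run side by side. The sketch below uses SciPy's differential evolution as a stand-in for a genetic algorithm; the test function and starting point are arbitrary choices.

      # Gradient method vs. evolutionary method on the multimodal Rastrigin function.
      import numpy as np
      from scipy.optimize import minimize, differential_evolution

      def rastrigin(x):
          x = np.asarray(x)
          return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

      bounds = [(-5.12, 5.12)] * 5
      x0 = np.full(5, 3.0)                      # deliberately far from the global optimum

      grad_res = minimize(rastrigin, x0, method="L-BFGS-B", bounds=bounds)
      evo_res = differential_evolution(rastrigin, bounds, seed=0)

      print("gradient (L-BFGS-B) :", round(grad_res.fun, 4))   # usually stuck in a local minimum
      print("evolutionary (DE)   :", round(evo_res.fun, 4))    # usually near the global minimum 0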

  3. A unified free-form representation applied to the shape optimization of the hohlraum with octahedral 6 laser entrance holes

    NASA Astrophysics Data System (ADS)

    Jiang, Shaoen; Huang, Yunbao; Jing, Longfei; Li, Haiyan; Huang, Tianxuan; Ding, Yongkun

    2016-01-01

    The hohlraum is very crucial for indirect laser driven Inertial Confinement Fusion. Usually, its shape is designed as a sphere, cylinder, or rugby, described by some fixed function such as an ellipse or parabola. Recently, a spherical hohlraum with octahedral 6 laser entrance holes (LEHs) has been presented with high flux symmetry [Lan et al., Phys. Plasmas 21, 010704 (2014); 21, 052704 (2014)]. However, only one shape parameter, i.e., the hohlraum-to-capsule radius ratio, has been optimized. In this paper, we build the hohlraum with octahedral 6LEHs with a unified free-form representation, in which, by varying additional shape parameters: (1) available hohlraum shapes can be uniformly and accurately represented, (2) it can be used to understand why the spherical hohlraum has higher flux symmetry, (3) it allows us to obtain a feasible shape design field satisfying flux symmetry constraints, and (4) a synthetically optimized hohlraum can be obtained with a tradeoff of flux symmetry and other hohlraum performance. Finally, the hohlraum with octahedral 6LEHs is modeled, analyzed, and then optimized based on the unified free-form representation. The results show that a feasible shape design field with flux asymmetry of no more than 1% can be obtained; over this feasible design field, the spherical hohlraum is validated to have the highest flux symmetry, and a synthetically optimal hohlraum can be found with comparable flux symmetry but larger volume between the laser spots and the centrally located capsule.

  4. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    NASA Astrophysics Data System (ADS)

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-01

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  5. The Six Principles of Facilities Stewardship

    ERIC Educational Resources Information Center

    Kaiser, Harvey H.; Klein, Eva

    2010-01-01

    Facilities stewardship means high-level and pervasive commitment to optimize capital investments, in order to achieve a high-functioning and attractive campus. It includes a major commitment to capital asset preservation and quality. Stewardship is about the long view of an institution's past and future. It ultimately forms the backdrop for…

  6. The Impact of Family Functioning and School Connectedness on Preadolescent Sense of Mastery

    ERIC Educational Resources Information Center

    Murphy, Emma L.; McKenzie, Vicki L.

    2016-01-01

    Families and schools are important environments that contribute to the resilience and positive development of preadolescent children. Sense of mastery, including its two central factors of optimism and self-efficacy, forms an important component of resilience during preadolescence (Prince-Embury, 2007). This study examined the interrelationships…

  7. Dividing Attention Between Tasks: Testing Whether Explicit Payoff Functions Elicit Optimal Dual-Task Performance.

    PubMed

    Farmer, George D; Janssen, Christian P; Nguyen, Anh T; Brumby, Duncan P

    2018-04-01

    We test people's ability to optimize performance across two concurrent tasks. Participants performed a number entry task while controlling a randomly moving cursor with a joystick. Participants received explicit feedback on their performance on these tasks in the form of a single combined score. This payoff function was varied between conditions to change the value of one task relative to the other. We found that participants adapted their strategy for interleaving the two tasks, by varying how long they spent on one task before switching to the other, in order to achieve the near maximum payoff available in each condition. In a second experiment, we show that this behavior is learned quickly (within 2-3 min over several discrete trials) and remained stable for as long as the payoff function did not change. The results of this work show that people are adaptive and flexible in how they prioritize and allocate attention in a dual-task setting. However, it also demonstrates some of the limits regarding people's ability to optimize payoff functions. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  8. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides for a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time has the capability of handling rank-ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included demonstrating the design tool's capabilities as it applies to this design problem.
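
    The sequential-unconstrained-minimization idea can be sketched as a loop of penalized unconstrained solves with an increasing penalty parameter. The toy below minimizes a "weight" subject to stress limits using a simple exterior quadratic penalty; it is only a schematic stand-in, not the report's extended interior penalty function, goal-programming formulation, or pressure-vessel cover-plate model.

      # Sequential unconstrained minimization with an exterior quadratic penalty:
      # minimize total member area ("weight") subject to stress limits.
      import numpy as np
      from scipy.optimize import minimize

      LOADS, ALLOWABLE = np.array([10.0, 15.0]), 25.0   # member loads and stress limit

      def weight(a):                      # total "weight": sum of member areas
          return a.sum()

      def violations(a):                  # stress_i - allowable, positive when violated
          return LOADS / a - ALLOWABLE

      a = np.array([1.0, 1.0])
      for r in [1.0, 10.0, 100.0, 1e4]:   # increasing exterior penalty parameter
          obj = lambda a, r=r: weight(a) + r * np.sum(np.maximum(violations(a), 0.0) ** 2)
          a = minimize(obj, a, method="L-BFGS-B", bounds=[(0.05, 10.0)] * 2).x

      print("member areas:", np.round(a, 3))          # expected near [0.4, 0.6]
      print("stresses    :", np.round(LOADS / a, 2))  # at or just above the 25.0 limit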

  9. Studies on molecular structure and tautomerism of a vitamin B6 analog with density functional theory.

    PubMed

    Sahoo, Suban K; Sharma, Darshna; Bera, Rati Kanta

    2012-05-01

    This work presents a computational study on the molecular structure and tautomeric equilibria of a novel Schiff base L derived from pyridoxal (PL) and o-phenylenediamine by using the density functional method B3LYP with the basis sets 6-31 G(d,p), 6-31++G(d,p), 6-311 G(d,p) and 6-311++G(d,p). The optimized geometrical parameters obtained by the B3LYP/6-31 G(d,p) method showed the best agreement with the experimental values. A tautomeric stability study of L inferred that the enolimine form is more stable than its ketoenamine form in both the gas phase and solution. However, protonation of the pyridoxal nitrogen atom (LH) accelerated the formation of the ketoenamine form; therefore, both ketoenamine and enolimine forms could be present in acidic media.

  10. Optimal exploration systems

    NASA Astrophysics Data System (ADS)

    Klesh, Andrew T.

    This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and on the other hand is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths including the Watchtower, Solar and Drag regime, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for analysis using the coupled model. Flight-paths found with this platform are presented that display the optimal exploration problem characteristics. These characteristics are used to form heuristics, such as a Generalized Traveling Salesman Problem solver, to simplify the exploration problem. These heuristics are used to empirically show the successful completion of an exploration mission by a physical explorer.

  11. A Perspective on the Clinical Translation of Scaffolds for Tissue Engineering

    PubMed Central

    Webber, Matthew J.; Khan, Omar F.; Sydlik, Stefanie A.; Tang, Benjamin C.; Langer, Robert

    2016-01-01

    Scaffolds have been broadly applied within tissue engineering and regenerative medicine to regenerate, replace, or augment diseased or damaged tissue. For a scaffold to perform optimally, several design considerations must be addressed, with an eye toward the eventual form, function, and tissue site. The chemical and mechanical properties of the scaffold must be tuned to optimize the interaction with cells and surrounding tissues. For complex tissue engineering, mass transport limitations, vascularization, and host tissue integration are important considerations. As the tissue architecture to be replaced becomes more complex and hierarchical, scaffold design must also match this complexity to recapitulate a functioning tissue. We outline these design constraints and highlight creative and emerging strategies to overcome limitations and modulate scaffold properties for optimal regeneration. We also highlight some of the most advanced strategies that have seen clinical application and discuss the hurdles that must be overcome for clinical use and commercialization of tissue engineering technologies. Finally, we provide a perspective on the future of scaffolds as a functional contributor to advancing tissue engineering and regenerative medicine. PMID:25201605

  12. A perspective on the clinical translation of scaffolds for tissue engineering.

    PubMed

    Webber, Matthew J; Khan, Omar F; Sydlik, Stefanie A; Tang, Benjamin C; Langer, Robert

    2015-03-01

    Scaffolds have been broadly applied within tissue engineering and regenerative medicine to regenerate, replace, or augment diseased or damaged tissue. For a scaffold to perform optimally, several design considerations must be addressed, with an eye toward the eventual form, function, and tissue site. The chemical and mechanical properties of the scaffold must be tuned to optimize the interaction with cells and surrounding tissues. For complex tissue engineering, mass transport limitations, vascularization, and host tissue integration are important considerations. As the tissue architecture to be replaced becomes more complex and hierarchical, scaffold design must also match this complexity to recapitulate a functioning tissue. We outline these design constraints and highlight creative and emerging strategies to overcome limitations and modulate scaffold properties for optimal regeneration. We also highlight some of the most advanced strategies that have seen clinical application and discuss the hurdles that must be overcome for clinical use and commercialization of tissue engineering technologies. Finally, we provide a perspective on the future of scaffolds as a functional contributor to advancing tissue engineering and regenerative medicine.

  13. [Baseline characteristics and changes in treatment after a period of optimization of the patients included in the study EFICAR].

    PubMed

    Gómez-Marcos, Manuel A; Agudo-Conde, Cristina; Torcal, Jesús; Echevarria, Pilar; Domingo, Mar; Arietaleanizbeascoa, María; Sanz-Guinea, Aitor; de la Torre, Maria M; Ramírez, Jose I; García-Ortiz, Luis

    2016-03-01

    To describe the baseline data and drug therapy changes during treatment optimization in patients with heart failure with depressed systolic function included in the EFICAR study. Multicenter randomized clinical trial. Seven health centers. 150 patients (ICFSD), age 68±10 years, 77% male. Sociodemographic variables, comorbidities (Charlson index), functional capacity, and quality of life were recorded, and drug therapy optimization was performed. The main etiology was ischemic heart disease (45%), with 89% in functional class II. The Charlson index was 2.03±1.05. The mean ejection fraction was 37±8%, with 19% of patients below 30%. A mean of 6.3±1.6 was reached on the stress test, 446±78 meters on the 6-minute test, and 13.7±4.4 seconds on the chair test. The overall quality-of-life score was 22.8±18.7 and, on the Short Form-36 Health Survey, physical health was 43.3±8.4 and mental health 50.1±10.6. After optimizing the treatment, neither the percentage of patients on drug therapy nor the doses of angiotensin-converting enzyme inhibitors, angiotensin II receptor antagonists, and beta-blockers changed. The majority of the subjects were in functional class II, with decreased functional capacity and quality of life and a high comorbidity index. A protocolized drug therapy adjustment did not increase the dose or the number of patients on effective drugs for heart failure with depressed systolic function. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  14. Quantitative NMR Approach to Optimize the Formation of Chemical Building Blocks from Abundant Carbohydrates.

    PubMed

    Elliot, Samuel G; Tolborg, Søren; Sádaba, Irantzu; Taarning, Esben; Meier, Sebastian

    2017-07-21

    The future role of biomass-derived chemicals relies on the formation of diverse functional monomers in high yields from carbohydrates. Recently, it has become clear that a series of α-hydroxy acids, esters, and lactones can be formed from carbohydrates in alcohol and water solvents using tin-containing catalysts such as Sn-Beta. These compounds are potential building blocks for polyesters bearing additional olefin and alcohol functionalities. An NMR approach was used to identify, quantify, and optimize the formation of these building blocks in the Sn-Beta-catalyzed transformation of abundant carbohydrates. Record yields of the target molecules can be achieved by obstructing competing reactions through solvent selection. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Optimal Control-Based Adaptive NN Design for a Class of Nonlinear Discrete-Time Block-Triangular Systems.

    PubMed

    Liu, Yan-Jun; Tong, Shaocheng

    2016-11-01

    In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems are in a block-triangular multi-input-multi-output pure-feedback structure, i.e., there are both state and input couplings and nonaffine functions included in every equation of each subsystem. The design objective is to provide a control scheme which not only guarantees the stability of the systems, but also achieves optimal control performance. The main contribution of this paper is that, for the first time, optimal performance is achieved for such a class of systems. Owing to the interactions among subsystems, constructing an optimal control signal is a difficult task. The design ideas are that: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function can be approximated by using an action network and a critic network, respectively; and 3) an optimal control signal is constructed with the weight update rules to be designed based on a gradient descent method. The stability of the systems can be proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.

  16. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    NASA Astrophysics Data System (ADS)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and a wide operating range are required for turbochargers in diesel engines. A recirculation-flow-type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves on a suction pipe and a shroud casing wall are connected by means of an annular passage, and a stable recirculation flow is formed at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation-flow-type casing is modified and optimized by using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization results give an optimized casing design that improves adiabatic efficiency over a wide operating flow rate range. A sensitivity analysis of the design parameters with respect to efficiency has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate, for which the increment of entropy rise is minimized at the grooves and passages of the rotating impeller.

  17. Concurrently examining unrealistic absolute and comparative optimism: Temporal shifts, individual-difference and event-specific correlates, and behavioural outcomes.

    PubMed

    Ruthig, Joelle C; Gamblin, Bradlee W; Jones, Kelly; Vanderzanden, Karen; Kehn, Andre

    2017-02-01

    Researchers have spent considerable effort examining unrealistic absolute optimism and unrealistic comparative optimism, yet there is a lack of research exploring them concurrently. This longitudinal study repeatedly assessed unrealistic absolute and comparative optimism within a performance context over several months to identify the degree to which they shift as a function of proximity to performance and performance feedback, their associations with global individual difference and event-specific factors, and their link to subsequent behavioural outcomes. Results showed similar shifts in unrealistic absolute and comparative optimism based on proximity to performance and performance feedback. Moreover, increases in both types of unrealistic optimism were associated with better subsequent performance beyond the effect of prior performance. However, several differences were found between the two forms of unrealistic optimism in their associations with global individual difference factors and event-specific factors, highlighting the distinctiveness of the two constructs. © 2016 The British Psychological Society.

  18. Integrated design and manufacturing for the high speed civil transport (a combined aerodynamics/propulsion optimization study)

    NASA Technical Reports Server (NTRS)

    Baecher, Juergen; Bandte, Oliver; DeLaurentis, Dan; Lewis, Kemper; Sicilia, Jose; Soboleski, Craig

    1995-01-01

    This report documents the efforts of a Georgia Tech High Speed Civil Transport (HSCT) aerospace student design team in completing a design methodology demonstration under NASA's Advanced Design Program (ADP). Aerodynamic and propulsion analyses are integrated into the synthesis code FLOPS in order to improve its prediction accuracy. Executing the integrated product and process development (IPPD) methodology proposed at the Aerospace Systems Design Laboratory (ASDL), an improved sizing process is described followed by a combined aero-propulsion optimization, where the objective function, average yield per revenue passenger mile ($/RPM), is constrained by flight stability, noise, approach speed, and field length restrictions. Primary goals include successful demonstration of the application of the response surface methodology (RSM) to parameter design, introduction to higher fidelity disciplinary analysis than normally feasible at the conceptual and early preliminary level, and investigations of relationships between aerodynamic and propulsion design parameters and their effect on the objective function, $/RPM. A unique approach to aircraft synthesis is developed in which statistical methods, specifically design of experiments and the RSM, are used to more efficiently search the design space for optimum configurations. In particular, two uses of these techniques are demonstrated. First, response model equations are formed which represent complex analysis in the form of a regression polynomial. Next, a second regression equation is constructed, not for modeling purposes, but instead for the purpose of optimization at the system level. Such an optimization problem with the given tools normally would be difficult due to the need for hard connections between the various complex codes involved. The statistical methodology presents an alternative and is demonstrated via an example of aerodynamic modeling and planform optimization for an HSCT.

  19. Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.

    PubMed

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang

    2010-07-01

    Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the inputs being a surface error map and an influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two steps in a single optimization through a constrained nonlinear optimization model, which takes both the two-norm of the surface residual error and the dwell-time gradient as the objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also be applied to other computer-controlled subaperture methods.
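
    A one-dimensional toy of the combined formulation can be set up as a bounded least-squares problem in which the removal is the convolution of the dwell time with the influence function, and a scaled first-difference block stands in for the dwell-time-gradient term of the objective. The surface error, influence function, and weighting below are synthetic assumptions, not data from a magnetorheological finishing machine.

      # Toy 1-D dwell-time solve: removal = A @ dwell, with a non-negativity bound
      # and a first-difference (gradient-smoothness) term appended to the system.
      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import lsq_linear

      n = 80
      x = np.linspace(-1.0, 1.0, n)
      error = 0.5 + 0.4 * np.cos(2.0 * np.pi * x)            # surface error to remove
      influence = np.exp(-0.5 * (np.arange(n) * 0.15) ** 2)   # influence vs. |separation|
      A = toeplitz(influence)                                 # symmetric convolution matrix

      D = np.diff(np.eye(n), axis=0)                          # first-difference operator
      mu = 0.05                                               # weight on the dwell gradient
      A_aug = np.vstack([A, mu * D])
      b_aug = np.concatenate([error, np.zeros(n - 1)])

      dwell = lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf)).x
      residual = error - A @ dwell
      print("rms residual form error:", np.sqrt(np.mean(residual**2)))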

  20. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) no explicit mapping functions in KDLPP, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain the optimal discriminant vectors that maximally optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  1. Vibrational self-consistent field theory using optimized curvilinear coordinates.

    PubMed

    Bulik, Ireneusz W; Frisch, Michael J; Vaccaro, Patrick H

    2017-07-28

    A vibrational SCF model is presented in which the functions forming the single-mode functions in the product wavefunction are expressed in terms of internal coordinates and the coordinates used for each mode are optimized variationally. This model involves no approximations to the kinetic energy operator and does not require a Taylor-series expansion of the potential. The non-linear optimization of coordinates is found to give much better product wavefunctions than the limited variations considered in most previous applications of SCF methods to vibrational problems. The approach is tested using published potential energy surfaces for water, ammonia, and formaldehyde. Variational flexibility allowed in the current ansätze results in excellent zero-point energies expressed through single-product states and accurate fundamental transition frequencies realized by short configuration-interaction expansions. Fully variational optimization of single-product states for excited vibrational levels also is discussed. The highlighted methodology constitutes an excellent starting point for more sophisticated treatments, as the bulk characteristics of many-mode coupling are accounted for efficiently in terms of compact wavefunctions (as evident from the accurate prediction of transition frequencies).

  2. Robust optimization of supersonic ORC nozzle guide vanes

    NASA Astrophysics Data System (ADS)

    Bufi, Elio A.; Cinnella, Paola

    2017-03-01

    An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application, and they are taken into account by describing the thermodynamics by means of the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting genetic algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.

  3. Optimal Mortgage Refinancing: A Closed Form Solution

    PubMed Central

    Agarwal, Sumit; Driscoll, John C.; Laibson, David I.

    2013-01-01

    We derive the first closed-form optimal refinancing rule: Refinance when the current mortgage interest rate falls below the original rate by at least (1/ψ)[ϕ + W(−exp(−ϕ))]. In this formula W(.) is the Lambert W-function, ψ = √(2(ρ + λ))/σ, ϕ = 1 + ψ(ρ + λ)κ/(M(1 − τ)), ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost and the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods. PMID:25843977
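
    The closed-form rule quoted above is straightforward to evaluate numerically with SciPy's Lambert W implementation. The sketch below does so for purely illustrative parameter values, which are not taken from the paper's calibration.

      # Evaluate the closed-form refinancing threshold with the Lambert W function.
      import numpy as np
      from scipy.special import lambertw

      rho, lam = 0.05, 0.10      # real discount rate, exogenous repayment rate
      sigma = 0.0109             # std. dev. of the mortgage rate
      kappa_over_M = 0.01        # refinancing cost as a fraction of remaining balance
      tau = 0.28                 # marginal tax rate

      psi = np.sqrt(2.0 * (rho + lam)) / sigma
      phi = 1.0 + psi * (rho + lam) * kappa_over_M / (1.0 - tau)
      threshold = (phi + lambertw(-np.exp(-phi)).real) / psi

      print(f"refinance once the rate has dropped by {threshold:.4f} "
            f"({threshold * 1e4:.0f} basis points)")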

  4. Robust optimal control of material flows in demand-driven supply networks

    NASA Astrophysics Data System (ADS)

    Laumanns, Marco; Lefeber, Erjen

    2006-04-01

    We develop a model based on stochastic discrete-time controlled dynamical systems in order to derive optimal policies for controlling the material flow in supply networks. Each node in the network is described as a transducer such that the dynamics of the material and information flows within the entire network can be expressed by a system of first-order difference equations, where some inputs to the system act as external disturbances. We apply methods from constrained robust optimal control to compute the explicit control law as a function of the current state. For the numerical examples considered, these control laws correspond to certain classes of optimal ordering policies from inventory management while avoiding, however, any a priori assumptions about the general form of the policy.
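
    The explicit state-feedback laws recovered by such an approach resemble classical order-up-to (base-stock) ordering policies. As a minimal illustration, the sketch below simulates such a policy on a single-node inventory difference equation with demand as the external disturbance; the base-stock level and demand model are hypothetical and are not taken from the paper.

      # Minimal simulation of an order-up-to (base-stock) rule on the difference
      # equation I[t+1] = I[t] + u[t] - d[t], with demand as the disturbance.
      import numpy as np

      rng = np.random.default_rng(2)
      T, base_stock = 52, 25.0
      inventory = np.zeros(T + 1)
      inventory[0] = 10.0

      for t in range(T):
          demand = rng.poisson(20)                       # disturbance input
          order = max(base_stock - inventory[t], 0.0)    # explicit control law u(I)
          inventory[t + 1] = inventory[t] + order - demand

      service_level = np.mean(inventory[1:] >= 0)
      print(f"weeks without backlog: {service_level:.0%}, "
            f"average on-hand inventory: {inventory[1:].clip(min=0).mean():.1f}")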

  5. An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism

    NASA Astrophysics Data System (ADS)

    Nagata, Yuichi

    Niching GAs have been widely investigated to apply genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we suggest a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can be applied also to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improve the performance of GAs.

  6. How the Public Engages With Brain Optimization

    PubMed Central

    O’Connor, Cliodhna

    2015-01-01

    In the burgeoning debate about neuroscience’s role in contemporary society, the issue of brain optimization, or the application of neuroscientific knowledge and technologies to augment neurocognitive function, has taken center stage. Previous research has characterized media discourse on brain optimization as individualistic in ethos, pressuring individuals to expend calculated effort in cultivating culturally desirable forms of selves and bodies. However, little research has investigated whether the themes that characterize media dialogue are shared by lay populations. This article considers the relationship between the representations of brain optimization that surfaced in (i) a study of British press coverage between 2000 and 2012 and (ii) interviews with forty-eight London residents. Both data sets represented the brain as a resource that could be manipulated by the individual, with optimal brain function contingent on applying self-control in one’s lifestyle choices. However, these ideas emerged more sharply in the media than in the interviews: while most interviewees were aware of brain optimization practices, few were committed to carrying them out. The two data sets diverged in several ways: the media’s intense preoccupation with optimizing children’s brains was not apparent in lay dialogue, while interviewees elaborated beliefs about the underuse of brain tissue that showed no presence in the media. This article considers these continuities and discontinuities in light of their wider cultural significance and their implications for the media–mind relationship in public engagement with neuroscience. PMID:26336326

  7. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  8. Application of Contraction Mappings to the Control of Nonlinear Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Killingsworth, W. R., Jr.

    1972-01-01

    The theoretical and applied aspects of successive approximation techniques are considered for the determination of controls for nonlinear dynamical systems. Particular emphasis is placed upon the methods of contraction mappings and modified contraction mappings. It is shown that application of the Pontryagin principle to the optimal nonlinear regulator problem results in necessary conditions for optimality in the form of a two point boundary value problem (TPBVP). The TPBVP is represented by an operator equation and functional analytic results on the iterative solution of operator equations are applied. The general convergence theorems are translated and applied to those operators arising from the optimal regulation of nonlinear systems. It is shown that simply structured matrices and similarity transformations may be used to facilitate the calculation of the matrix Green functions and the evaluation of the convergence criteria. A controllability theory based on the integral representation of TPBVP's, the implicit function theorem, and contraction mappings is developed for nonlinear dynamical systems. Contraction mappings are theoretically and practically applied to a nonlinear control problem with bounded input control and the Lipschitz norm is used to prove convergence for the nondifferentiable operator. A dynamic model representing community drug usage is developed and the contraction mappings method is used to study the optimal regulation of the nonlinear system.
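
    The successive-approximation machinery can be illustrated on a small fixed-point problem: the Picard (contraction-mapping) iteration below solves the integral-equation form of a simple initial value problem on a grid and compares the result with the exact solution. This is only a toy analogue of the operator iteration described above, not the dissertation's two-point boundary value formulation.

      # Picard / contraction-mapping iteration for the integral-equation form of
      #   x'(t) = -x(t) + sin(t),  x(0) = 1,
      # i.e. the fixed point of (T x)(t) = 1 + integral_0^t [-x(s) + sin(s)] ds.
      import numpy as np

      t = np.linspace(0.0, 1.0, 201)
      dt = t[1] - t[0]

      def T(x):
          integrand = -x + np.sin(t)
          # cumulative trapezoidal integral from 0 to t
          integral = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
          return 1.0 + integral

      x = np.ones_like(t)                      # initial guess
      for k in range(60):
          x_new = T(x)
          if np.max(np.abs(x_new - x)) < 1e-12:
              break
          x = x_new

      exact = 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
      print(f"converged in {k} iterations, max error vs. exact solution: "
            f"{np.max(np.abs(x - exact)):.2e}")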

  9. An Empirical Mass Function Distribution

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L* galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10^10–10^13 h^-1 M⊙. In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as L*. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.

  10. Diagnosis and treatment of osteoarthritis.

    PubMed

    Taruc-Uy, Rafaelani L; Lynch, Scott A

    2013-12-01

    Osteoarthritis presents in primary and secondary forms. The primary, or idiopathic, form occurs in previously intact joints without any inciting agent, whereas the secondary form is caused by underlying predisposing factors (eg, trauma). The diagnosis of osteoarthritis is primarily based on thorough history and physical examination findings, with or without radiographic evidence. Although some patients may be asymptomatic initially, the most common symptom is pain. Treatment options are generally classified as pharmacologic, nonpharmacologic, surgical, and complementary and/or alternative, typically used in combination to achieve optimal results. The goals of treatment are alleviation of symptoms and improvement in functional status. Published by Elsevier Inc.

  11. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The region of the boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution of the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one-impulse. For brevity degenerate and singular solutions are not discussed in detail, but should be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  12. Globally optimal grouping for symmetric closed boundaries by combining boundary and region information.

    PubMed

    Stahl, Joachim S; Wang, Song

    2008-03-01

    Many natural and man-made structures have a boundary that shows a certain level of bilateral symmetry, a property that plays an important role in both human and computer vision. In this paper, we present a new grouping method for detecting closed boundaries with symmetry. We first construct a new type of grouping token in the form of symmetric trapezoids by pairing line segments detected from the image. A closed boundary can then be achieved by connecting some trapezoids with a sequence of gap-filling quadrilaterals. For such a closed boundary, we define a unified grouping cost function in a ratio form: the numerator reflects the boundary information of proximity and symmetry and the denominator reflects the region information of the enclosed area. The introduction of the region-area information in the denominator is able to avoid a bias toward shorter boundaries. We then develop a new graph model to represent the grouping tokens. In this new graph model, the grouping cost function can be encoded by carefully designed edge weights and the desired optimal boundary corresponds to a special cycle with a minimum ratio-form cost. We finally show that such a cycle can be found in polynomial time using a previous graph algorithm. We implement this symmetry-grouping method and test it on a set of synthetic data and real images. The performance is compared to two previous grouping methods that do not consider symmetry in their grouping cost functions.
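
    The key computational step in this record is finding a cycle that minimizes a ratio-form cost in polynomial time. As a purely illustrative companion (not the authors' algorithm or graph construction), a minimum ratio cycle can be approximated by a binary search on the ratio combined with Bellman-Ford negative-cycle detection; the edge costs and gains below are hypothetical stand-ins for the boundary and region terms.

```python
import math

def has_negative_cycle(n, edges):
    """Bellman-Ford negative-cycle detection. edges: (u, v, weight), vertices 0..n-1."""
    dist = [0.0] * n  # virtual source trick: start all distances at 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)

def min_ratio_cycle(n, edges, lo=0.0, hi=1e6, tol=1e-6):
    """Approximate min over cycles of (sum of costs)/(sum of gains).
    edges: (u, v, cost, gain) with gain > 0. For a trial ratio lam, reweight
    each edge as cost - lam * gain; a negative cycle exists iff some cycle
    achieves a ratio below lam."""
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        reweighted = [(u, v, c - lam * g) for u, v, c, g in edges]
        if has_negative_cycle(n, reweighted):
            hi = lam   # a smaller-ratio cycle exists
        else:
            lo = lam
    return 0.5 * (lo + hi)

# Toy example: a 3-edge cycle with total boundary cost 3 and enclosed-area gain 6
edges = [(0, 1, 1.0, 2.0), (1, 2, 1.0, 2.0), (2, 0, 1.0, 2.0)]
print(min_ratio_cycle(3, edges))  # ~0.5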

  13. Expression and purification of functional Clostridium perfringens alpha and epsilon toxins in Escherichia coli.

    PubMed

    Zhao, Yao; Kang, Lin; Gao, Shan; Zhou, Yang; Su, Libo; Xin, Wenwen; Su, Yuxin; Wang, Jinglin

    2011-06-01

    The alpha and epsilon toxins are 2 of the 4 major lethal toxins of the pathogen Clostridium perfringens. In this study, the expression of the epsilon toxin (etx) gene of C. perfringens was optimized by replacing rare codons with high-frequency codons, and the optimized gene was synthesized using overlapping PCR. Then, the etx gene or the alpha-toxin gene (cpa) was individually inserted into the pTIG-Trx expression vector with a hexahistidine tag and a thioredoxin (Trx) to facilitate their purification and induce the expression of soluble proteins. The recombinant alpha toxin (rCPA) and epsilon toxin (rETX) were highly expressed as soluble forms in the recipient Escherichia coli BL21 strain. The rCPA and rETX were purified using Ni(2+)-chelating chromatography and size-exclusion chromatography, and the entire purification process recovered about 40% of each target protein from the starting materials. The purified target toxins each formed a single band at about 42 kDa (rCPA) or 31 kDa (rETX) in sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and their functional activity was confirmed by bioactivity assays. We have shown that the production of large amounts of soluble and functional proteins by using the pTIG-Trx vector in E. coli is a good alternative for the production of native alpha and epsilon toxins and could also be useful for the production of other toxic proteins with soluble forms. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Dielectric function for doped graphene layer with barium titanate

    NASA Astrophysics Data System (ADS)

    Martinez Ramos, Manuel; Garces Garcia, Eric; Magana, Fernado; Vazquez Fonseca, Gerardo Jorge

    2015-03-01

    The aim of our study is to calculate the dielectric function of a graphene layer doped with barium titanate. Density functional theory within the local density approximation, using the plane-wave and pseudopotential scheme as implemented in the Quantum Espresso suite of programs, was used. We considered 128 carbon atoms with a barium titanate cluster of 11 molecules as the unit cell with periodic boundary conditions. The structural configuration was optimized by relaxing all atomic positions to minimize the total energy. The band structure, density of states, and linear optical response (the imaginary part of the dielectric tensor) were calculated. We thank the Dirección General de Asuntos del Personal Académico de la Universidad Nacional Autónoma de México for partial financial support through Grant IN-106514, and the Miztli supercomputing center for technical assistance.

  15. Learning-Based Adaptive Optimal Tracking Control of Strict-Feedback Nonlinear Systems.

    PubMed

    Gao, Weinan; Jiang, Zhong-Ping

    2018-06-01

    This paper proposes a novel data-driven control approach to address the problem of adaptive optimal tracking for a class of nonlinear systems taking the strict-feedback form. Adaptive dynamic programming (ADP) and nonlinear output regulation theories are integrated for the first time to compute an adaptive near-optimal tracker without any a priori knowledge of the system dynamics. Fundamentally different from adaptive optimal stabilization problems, the solution to a Hamilton-Jacobi-Bellman (HJB) equation, not necessarily a positive definite function, cannot be approximated through the existing iterative methods. This paper proposes a novel policy iteration technique for solving positive semidefinite HJB equations with rigorous convergence analysis. A two-phase data-driven learning method is developed and implemented online by ADP. The efficacy of the proposed adaptive optimal tracking control methodology is demonstrated via a Van der Pol oscillator with time-varying exogenous signals.
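
    The abstract centers on a policy-iteration scheme for solving (positive semidefinite) HJB equations. A minimal sketch of the iterate structure in the classical linear-quadratic special case (Kleinman's algorithm: policy evaluation via a Lyapunov equation, then policy improvement) is given below; this is not the paper's data-driven ADP method, and the system matrices are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative continuous-time linear system dx/dt = A x + B u
A = np.array([[0.0, 1.0], [-1.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost

K = np.array([[0.0, 3.0]])  # an initial stabilizing feedback u = -K x
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak' P + P Ak = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement
    K = np.linalg.solve(R, B.T @ P)

print("Converged feedback gain:", K)  # matches the LQR/Riccati solution
```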

  16. Statistical considerations in the development of injury risk functions.

    PubMed

    McMurry, Timothy L; Poplin, Gerald S

    2015-01-01

    We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
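
    One of the points raised here is that pointwise confidence intervals for injury risk should be built on the linear-predictor (logit) scale and then transformed to the probability scale. The sketch below illustrates that step with a hypothetical single predictor and simulated outcomes; the use of statsmodels is an assumption for illustration, not the authors' toolchain.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical data: stimulus level x and binary injury outcome y
x = rng.uniform(0, 10, 200)
p_true = 1.0 / (1.0 + np.exp(-(x - 5.0)))
y = rng.binomial(1, p_true)

X = sm.add_constant(x)             # design matrix [1, x]
fit = sm.Logit(y, X).fit(disp=0)   # maximum-likelihood logistic fit

# Risk curve with 95% CI built on the logit scale, then transformed
grid = sm.add_constant(np.linspace(0, 10, 50))
eta = grid @ fit.params                                    # linear predictor
se = np.sqrt(np.einsum("ij,jk,ik->i", grid, fit.cov_params(), grid))
lo, hi = eta - 1.96 * se, eta + 1.96 * se
expit = lambda z: 1.0 / (1.0 + np.exp(-z))
risk, risk_lo, risk_hi = expit(eta), expit(lo), expit(hi)  # valid (0, 1) bounds
```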

  17. Children's Narrative Accounts and Judgments of Their Own Peer-Exclusion Experiences

    ERIC Educational Resources Information Center

    Wainryb, Cecilia; Komolova, Masha; Brehl, Beverly

    2014-01-01

    Although exclusion is commonly thought of as a form of relational or social aggression, it often reflects attempts at maintaining friendships, drawing group boundaries, and optimizing group functioning and can thus also be considered an inevitable feature of normative social interactions. This study examines the narrative accounts and judgments of…

  18. Characterization and prediction of the backscattered form function of an immersed cylindrical shell using hybrid fuzzy clustering and bio-inspired algorithms.

    PubMed

    Agounad, Said; Aassif, El Houcein; Khandouch, Younes; Maze, Gérard; Décultot, Dominique

    2018-02-01

    The acoustic scattering of a plane wave by an elastic cylindrical shell is studied. A new approach is developed to predict the form function of an immersed cylindrical shell of radius ratio b/a ('b' is the inner radius and 'a' is the outer radius). The prediction of the backscattered form function is investigated by a combined approach between fuzzy clustering algorithms and bio-inspired algorithms. Four well-known fuzzy clustering algorithms, the fuzzy c-means (FCM), the Gustafson-Kessel algorithm (GK), the fuzzy c-regression model (FCRM), and the Gath-Geva algorithm (GG), are combined with particle swarm optimization and a genetic algorithm. The symmetric and antisymmetric circumferential waves A, S0, A1, S1, and S2 are investigated in a reduced frequency (k1a) range extending over 0.1

  19. Connecting source aggregating areas with distributive regions via Optimal Transportation theory.

    NASA Astrophysics Data System (ADS)

    Lanzoni, S.; Putti, M.

    2016-12-01

    We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations, which closely resembles the classical conservation laws of water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.

  20. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-02-21

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  1. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-20

    We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. This raw data comes from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  2. Aerodynamic shape optimization using control theory

    NASA Technical Reports Server (NTRS)

    Reuther, James

    1996-01-01

    Aerodynamic shape design has long persisted as a difficult scientific challenge due to its highly nonlinear flow physics and daunting geometric complexity. However, with the emergence of Computational Fluid Dynamics (CFD) it has become possible to make accurate predictions of flows which are not dominated by viscous effects. It is thus worthwhile to explore the extension of CFD methods for flow analysis to the treatment of aerodynamic shape design. Two new aerodynamic shape design methods are developed which combine existing CFD technology, optimal control theory, and numerical optimization techniques. Flow analysis methods for the potential flow equation and the Euler equations form the basis of the two respective design methods. In each case, optimal control theory is used to derive the adjoint differential equations, the solution of which provides the necessary gradient information to a numerical optimization method much more efficiently than by conventional finite differencing. Each technique uses a quasi-Newton numerical optimization algorithm to drive an aerodynamic objective function toward a minimum. An analytic grid perturbation method is developed to modify body fitted meshes to accommodate shape changes during the design process. Both Hicks-Henne perturbation functions and B-spline control points are explored as suitable design variables. The new methods prove to be computationally efficient and robust, and can be used for practical airfoil design including geometric and aerodynamic constraints. Objective functions are chosen to allow both inverse design to a target pressure distribution and wave drag minimization. Several design cases are presented for each method illustrating its practicality and efficiency. These include non-lifting and lifting airfoils operating at both subsonic and transonic conditions.

  3. A finite element based method for solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.

    1989-01-01

    A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among characteristics of the finite element formulation is the fact that in the algebraic equations which contain costates, they appear linearly. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.

  4. Unconventional bearing capacity analysis and optimization of multicell box girders.

    PubMed

    Tepic, Jovan; Doroslovacki, Rade; Djelosevic, Mirko

    2014-01-01

    This study deals with unconventional bearing capacity analysis and the procedure of optimizing a two-cell box girder. The generalized model which enables the local stress-strain analysis of multicell girders was developed based on the principle of cross-sectional decomposition. The applied methodology is verified using the experimental data (Djelosevic et al., 2012) for traditionally formed box girders. The qualitative and quantitative evaluation of results obtained for the two-cell box girder is realized based on comparative analysis using the finite element method (FEM) and the ANSYS v12 software. The deflection function obtained by analytical and numerical methods was found consistent provided that the maximum deviation does not exceed 4%. Multicell box girders are rationally designed support structures characterized by much lower susceptibility of their cross-sectional elements to buckling and higher specific capacity than traditionally formed box girders. The developed local stress model is applied for optimizing the cross section of a two-cell box girder. The authors point to the advantages of implementing the model of local stresses in the optimization process and conclude that the technological reserve of bearing capacity amounts to 20% at the same girder weight and constant load conditions.

  5. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for finding the coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
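
    A minimal sketch of the technique named in this record (a hand-rolled global-best PSO fitting a bi-exponential radial dose function) is given below. The tabulated values are invented for illustration and are not the published AgX-100 data; the fitness function mirrors the abstract's maximum-deviation criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tabulated radial dose function values (not the published data)
r = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
g = np.array([1.10, 1.00, 0.80, 0.61, 0.33, 0.17, 0.06])

def model(c, r):
    a1, b1, a2, b2 = c
    return a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)

def fitness(c):
    return np.max(np.abs(model(c, r) - g) / g)  # maximum relative deviation

# Standard global-best PSO
n_particles, n_dims, n_gen = 40, 4, 1500
pos = rng.uniform(0, 2, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(n_gen):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("coefficients:", gbest, "max relative deviation:", pbest_val.min())
```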

  6. How we value the future affects our desire to learn.

    PubMed

    Moore, Alana L; Hauser, Cindy E; McCarthy, Michael A

    2008-06-01

    Active adaptive management is increasingly advocated in natural resource management and conservation biology. Active adaptive management looks at the benefit of employing strategies that may be suboptimal in the near term but which may provide additional information that will facilitate better management in future years. However, when comparing management policies it is traditional to weigh future rewards geometrically (at a constant discount rate) which results in far-distant rewards making a negligible contribution to the total benefit. Under such a discounting scheme active adaptive management is rarely of much benefit, especially if learning is slow. A growing number of authors advocate the use of alternative forms of discounting when evaluating optimal strategies for long-term decisions which have a social component. We consider a theoretical harvested population for which the recovery rate from an unharvestably small population size is unknown and look at the effects on the benefit of experimental management when three different forms of discounting are employed. Under geometric discounting, with a discount rate of 5% per annum, managing to learn actively had little benefit. This study demonstrates that discount functions which weigh future rewards more heavily result in more conservative harvesting strategies, but do not necessarily encourage active learning. Furthermore, the optimal management strategy is not equivalent to employing geometric discounting at a lower rate. If alternative discount functions are made mandatory in calculating optimal management strategies for environmental management then this will affect the structure of optimal management regimes and change when and how much we are willing to invest in learning.
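
    The contrast the study draws between geometric discounting and more future-weighted alternatives can be made concrete by comparing the weight each scheme places on a reward t years ahead. A small numeric sketch, assuming a hyperbolic form as one possible alternative (the study does not specify which alternatives were used):

```python
import numpy as np

years = np.arange(0, 51, 10)
rate = 0.05                          # 5% per annum, as in the study
geometric = (1 + rate) ** -years     # weight shrinks exponentially with time
hyperbolic = 1 / (1 + rate * years)  # one common future-weighted alternative

for t, g, h in zip(years, geometric, hyperbolic):
    print(f"t={t:2d}  geometric={g:.3f}  hyperbolic={h:.3f}")
# At t=50 the geometric weight is ~0.09 while the hyperbolic weight is ~0.29,
# so far-future rewards (e.g., gains from learning) matter much more under the latter.
```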

  7. Exercise and nutritional approaches to prevent frail bones, falls and fractures: an update.

    PubMed

    Daly, R M

    2017-04-01

    Osteoporosis (low bone strength) and sarcopenia (low muscle mass, strength and/or impaired function) often co-exist (hence the term 'sarco-osteoporosis') and have similar health consequences with regard to disability, falls, frailty and fractures. Exercise and adequate nutrition, particularly with regard to vitamin D, calcium and protein, are key lifestyle approaches that can simultaneously optimize bone, muscle and functional outcomes in older people, if they are individually tailored and appropriately prescribed in terms of the type and dose. Not all forms of exercise are equally effective for optimizing musculoskeletal health. Regular walking alone has little or no effect on bone or muscle. Traditional progressive resistance training (PRT) is effective for improving muscle mass, size and strength, but it has mixed effects on muscle function and falls which may be due to the common prescription of slow and controlled movement patterns. At present, targeted multi-modal programs incorporating traditional and high-velocity PRT, weight-bearing impact exercises and challenging balance/mobility activities appear to be most effective for optimizing musculoskeletal health and function. Reducing and breaking up sitting time may also help attenuate muscle loss. There is also evidence to support an interaction between exercise and various nutritional factors, particularly protein and some multi-nutrient supplements, on muscle and bone health in the elderly. This review summary provides an overview of the latest evidence with regard to the optimal type and dose of exercise and the role of various nutritional factors for preventing bone and muscle loss and improving functional capacity in older people.

  8. Optimizing Insulin Glargine Plus One Injection of Insulin Glulisine in Type 2 Diabetes in the ELEONOR Study

    PubMed Central

    Nicolucci, Antonio; Del Prato, Stefano; Vespasiani, Giacomo

    2011-01-01

    OBJECTIVE To determine the functional health status and treatment satisfaction in patients with type 2 diabetes from the Evaluation of Lantus Effect ON Optimization of use of single dose Rapid insulin (ELEONOR) study that investigated whether a telecare program helps optimization of basal insulin glargine with one bolus injection of insulin glulisine. RESEARCH DESIGN AND METHODS Functional health status and treatment satisfaction were investigated using the 36-Item Short-Form (SF-36) Health Survey, the World Health Organization Well-Being Questionnaire (WBQ), and the Diabetes Treatment Satisfaction Questionnaire. RESULTS Of 291 randomized patients, 238 completed the study (telecare: 114; self-monitoring blood glucose: 124). Significant improvements were detected in most SF-36 domains, in WBQ depression and anxiety scores, and in treatment satisfaction, without differences between study groups. CONCLUSIONS An insulin regimen that substantially improves metabolic control, while minimizing the risk of hypoglycemia, can positively affect physical and psychologic well-being and treatment satisfaction irrespective of the educational support system used. PMID:21953799

  9. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  10. Convexity Conditions and the Legendre-Fenchel Transform for the Product of Finitely Many Positive Definite Quadratic Forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Yunbin, E-mail: zhaoyy@maths.bham.ac.u

    2010-12-15

    While the product of finitely many convex functions has been investigated in the field of global optimization, some fundamental issues such as the convexity condition and the Legendre-Fenchel transform for the product function remain unresolved. Focusing on quadratic forms, this paper is aimed at addressing the question: When is the product of finitely many positive definite quadratic forms convex, and what is the Legendre-Fenchel transform for it? First, we show that the convexity of the product is determined intrinsically by the condition number of so-called 'scaled matrices' associated with the quadratic forms involved. The main result claims that if the condition numbers of these scaled matrices are bounded above by an explicit constant (which depends only on the number of quadratic forms involved), then the product function is convex. Second, we prove that the Legendre-Fenchel transform for the product of positive definite quadratic forms can be expressed, and the computation of the transform amounts to finding the solution to a system of equations (or equally, finding a Brouwer's fixed point of a mapping) with a special structure. Thus, a broader question than the open 'Question 11' in Hiriart-Urruty (SIAM Rev. 49, 225-273, 2007) is addressed in this paper.
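
    As a purely numerical companion to the paper's result (and not its actual criterion), one can probe the convexity of f(x) = ∏ xᵀAᵢx empirically by sampling a finite-difference Hessian at random points: products of well-conditioned positive definite factors tend to pass the check, while badly conditioned factors typically fail it. All matrices below are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_pd(n, spread):
    """Random symmetric positive definite matrix with eigenvalues in [1, spread]."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q @ np.diag(rng.uniform(1.0, spread, n)) @ Q.T

def product_of_forms(mats, x):
    return np.prod([x @ A @ x for A in mats])

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

def looks_convex(mats, trials=200):
    """Empirical check: Hessian PSD at random sample points (not a proof)."""
    n = mats[0].shape[0]
    f = lambda x: product_of_forms(mats, x)
    for _ in range(trials):
        x = rng.standard_normal(n)
        if np.linalg.eigvalsh(hessian_fd(f, x)).min() < -1e-3:
            return False
    return True

# Well-conditioned factors: the product tends to be convex
print(looks_convex([random_pd(3, 1.2), random_pd(3, 1.2)]))
# Badly conditioned factors: convexity typically fails
print(looks_convex([random_pd(3, 50.0), random_pd(3, 50.0)]))
```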

  11. Functional annotation by sequence-weighted structure alignments: statistical analysis and case studies from the Protein 3000 structural genomics project in Japan.

    PubMed

    Standley, Daron M; Toh, Hiroyuki; Nakamura, Haruki

    2008-09-01

    A method to functionally annotate structural genomics targets, based on a novel structural alignment scoring function, is proposed. In the proposed score, position-specific scoring matrices are used to weight structurally aligned residue pairs to highlight evolutionarily conserved motifs. The functional form of the score is first optimized for discriminating domains belonging to the same Pfam family from domains belonging to different families but the same CATH or SCOP superfamily. In the optimization stage, we consider four standard weighting functions as well as our own, the "maximum substitution probability," and combinations of these functions. The optimized score achieves an area of 0.87 under the receiver-operating characteristic curve with respect to identifying Pfam families within a sequence-unique benchmark set of domain pairs. Confidence measures are then derived from the benchmark distribution of true-positive scores. The alignment method is next applied to the task of functionally annotating 230 query proteins released to the public as part of the Protein 3000 structural genomics project in Japan. Of these queries, 78 were found to align to templates with the same Pfam family as the query or had sequence identities > or = 30%. Another 49 queries were found to match more distantly related templates. Within this group, the template predicted by our method to be the closest functional relative was often not the most structurally similar. Several nontrivial cases are discussed in detail. Finally, 103 queries matched templates at the fold level, but not the family or superfamily level, and remain functionally uncharacterized. 2008 Wiley-Liss, Inc.

  12. Basic properties of lattices of cubes, algorithms for their construction, and application capabilities in discrete optimization

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2015-01-01

    The basic properties of a new type of lattice, the lattice of cubes, are described. It is shown that, with a suitable choice of union and intersection operations, the set of all subcubes of an N-cube forms a lattice, which is called a lattice of cubes. Algorithms for constructing such lattices are described, and the results produced by these algorithms in the case of lattices of various dimensions are illustrated. It is proved that a lattice of cubes is a lattice with supplements, which makes it possible to minimize and maximize supermodular functions on it. Examples of such functions are given. The possibility of applying previously developed efficient optimization algorithms to the formulation and solution of new classes of problems on lattices of cubes is also discussed.

  13. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership functions and probability density functions are obtained through fuzzy random simulation of past data.

  14. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index ( d' ) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength ( β ) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
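
    To make the maxi-min objective and the Gaussian-basis parameterization of the fluence field concrete, the sketch below solves a deliberately small 1-D stand-in: a fluence profile over view angle, expanded in Gaussian basis functions, is chosen to maximize the minimum of a crude detectability surrogate over a few sample locations. The attenuation model, the detectability surrogate, and the use of scipy's differential evolution in place of CMA-ES are all assumptions made for illustration; this is not the paper's framework.

```python
import numpy as np
from scipy.optimize import differential_evolution

angles = np.linspace(0, np.pi, 60)
locations = np.linspace(0.2, 0.8, 5)   # hypothetical sample positions
centers = np.linspace(0, np.pi, 6)     # Gaussian basis centers
width = 0.4

def fluence(coeffs):
    basis = np.exp(-0.5 * ((angles[:, None] - centers[None, :]) / width) ** 2)
    return np.clip(basis @ coeffs, 0, None)

def attenuation(loc):
    # Hypothetical view-dependent attenuation seen from each sample location
    return 1.0 + 0.8 * np.cos(angles - np.pi * loc) ** 2

def detectability(coeffs):
    f = fluence(coeffs)
    f = f / (f.sum() + 1e-12) * len(angles)   # fixed total exposure budget
    # Crude surrogate: fluence reaching each sample location
    return np.array([np.sum(f / attenuation(loc)) for loc in locations])

def neg_min_detectability(coeffs):
    return -detectability(coeffs).min()       # maxi-min objective

result = differential_evolution(neg_min_detectability,
                                bounds=[(0.0, 1.0)] * len(centers), seed=0)
print("optimal basis coefficients:", np.round(result.x, 3))
print("per-location detectability:", np.round(detectability(result.x), 3))
```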

  15. Grey fuzzy optimization model for water quality management of a river system

    NASA Astrophysics Data System (ADS)

    Karmakar, Subhankar; Mujumdar, P. P.

    2006-07-01

    A grey fuzzy optimization model is developed for water quality management of river system to address uncertainty involved in fixing the membership functions for different goals of Pollution Control Agency (PCA) and dischargers. The present model, Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of PCA and dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels are modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of membership functions, the membership functions themselves are treated as fuzzy in the model and the membership parameters are expressed as interval grey numbers, a closed and bounded interval with known lower and upper bounds but unknown distribution information. The model provides flexibility for PCA and dischargers to specify their aspirations independently, as the membership parameters for different membership functions, specified for different imprecise goals are interval grey numbers in place of a deterministic real number. In the final solution optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers. This enhances the flexibility and applicability in decision-making, as the decision-maker gets a range of optimal solutions for fixing the final decision scheme considering technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with case study of the Tunga-Bhadra river system in India.

  16. Real-World Application of Robust Design Optimization Assisted by Response Surface Approximation and Visual Data-Mining

    NASA Astrophysics Data System (ADS)

    Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru

    A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.

  17. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  18. User-oriented design strategies for a Lunar base

    NASA Astrophysics Data System (ADS)

    Jukola, Paivi

    'Form follows function' can be translated, among other things, to communicate a desire to prioritize functional objectives for a particular design task. Thus it is less likely that a design program for a multi-functional habitat, for an all-purpose vehicle, or for a general community, will lead to the most optimal, cost-effective and sustainable solutions. A power plant, a factory, a farm and a research center have over centuries had different logistical and functional requirements, regardless of the local culture in various parts of planet Earth. The 'same size fits all' concept is likely to lead to less user-friendly solutions. The paper proposes to rethink and to investigate alternative strategies to formulate objectives for a Lunar base. Diverse scientific experiments and potential future research programs for the Moon have a number of functional requirements that differ from each other. A crew of 4-6 may not be optimal for the most innovative research. The discussion is based on Human Factors and Design research conducted for visiting professor lectures for a Lunar base project with Howard University and NASA Marshall Space Center in 2009-2010.

  19. Learning in engineered multi-agent systems

    NASA Astrophysics Data System (ADS)

    Menon, Anup

    Consider the problem of maximizing the total power produced by a wind farm. Due to aerodynamic interactions between wind turbines, each turbine maximizing its individual power---as is the case in present-day wind farms---does not lead to optimal farm-level power capture. Further, there are no good models to capture the said aerodynamic interactions, rendering model based optimization techniques ineffective. Thus, model-free distributed algorithms are needed that help turbines adapt their power production on-line so as to maximize farm-level power capture. Motivated by such problems, the main focus of this dissertation is a distributed model-free optimization problem in the context of multi-agent systems. The set-up comprises of a fixed number of agents, each of which can pick an action and observe the value of its individual utility function. An individual's utility function may depend on the collective action taken by all agents. The exact functional form (or model) of the agent utility functions, however, are unknown; an agent can only measure the numeric value of its utility. The objective of the multi-agent system is to optimize the welfare function (i.e. sum of the individual utility functions). Such a collaborative task requires communications between agents and we allow for the possibility of such inter-agent communications. We also pay attention to the role played by the pattern of such information exchange on certain aspects of performance. We develop two algorithms to solve this problem. The first one, engineered Interactive Trial and Error Learning (eITEL) algorithm, is based on a line of work in the Learning in Games literature and applies when agent actions are drawn from finite sets. While in a model-free setting, we introduce a novel qualitative graph-theoretic framework to encode known directed interactions of the form "which agents' action affect which others' payoff" (interaction graph). We encode explicit inter-agent communications in a directed graph (communication graph) and, under certain conditions, prove convergence of agent joint action (under eITEL) to the welfare optimizing set. The main condition requires that the union of interaction and communication graphs be strongly connected; thus the algorithm combines an implicit form of communication (via interactions through utility functions) with explicit inter-agent communications to achieve the given collaborative goal. This work has kinship with certain evolutionary computation techniques such as Simulated Annealing; the algorithm steps are carefully designed such that it describes an ergodic Markov chain with a stationary distribution that has support over states where agent joint actions optimize the welfare function. The main analysis tool is perturbed Markov chains and results of broader interest regarding these are derived as well. The other algorithm, Collaborative Extremum Seeking (CES), uses techniques from extremum seeking control to solve the problem when agent actions are drawn from the set of real numbers. In this case, under the assumption of existence of a local minimizer for the welfare function and a connected undirected communication graph between agents, a result regarding convergence of joint action to a small neighborhood of a local optimizer of the welfare function is proved. 
Since extremum seeking control uses a simultaneous gradient estimation-descent scheme, gradient information available in the continuous action space formulation is exploited by the CES algorithm to yield improved convergence speeds. The effectiveness of this algorithm for the wind farm power maximization problem is evaluated via simulations. Lastly, we turn to a different question regarding the role of the information exchange pattern on performance of distributed control systems by means of a case study for the vehicle platooning problem. In the vehicle platoon control problem, the objective is to design distributed control laws for individual vehicles in a platoon (or a road-train) that regulate inter-vehicle distances at a specified safe value while the entire platoon follows a leader-vehicle. While most of the literature on the problem deals with some inadequacy in control performance when the information exchange is of the nearest neighbor-type, we consider an arbitrary graph serving as information exchange pattern and derive a relationship between how a certain indicator of control performance is related to the information pattern. Such analysis helps in understanding qualitative features of the `right' information pattern for this problem.
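
    The CES algorithm described above builds on extremum seeking, which estimates a gradient from payoff measurements alone by injecting a sinusoidal perturbation (dither) and correlating it with the measured utility. A minimal single-agent sketch on a scalar utility is shown below; the multi-agent, consensus-based elements of CES and the specific utility are omitted, and the chosen gains and dither frequency are illustrative only.

```python
import numpy as np

def utility(theta):
    # Unknown-to-the-controller utility with its maximum at theta = 2
    return -(theta - 2.0) ** 2

# Classic perturbation-based extremum seeking: theta_hat climbs the gradient
# estimated by demodulating the dithered utility measurement.
dt, omega, a, k = 0.01, 10.0, 0.1, 0.8
theta_hat, hp = 0.0, 0.0            # parameter estimate and filter state
for i in range(20000):
    t = i * dt
    dither = a * np.sin(omega * t)
    y = utility(theta_hat + dither)          # only the payoff is measured
    hp += dt * 1.0 * (y - hp)                # low-pass state; y - hp ~ high-passed y
    grad_est = (y - hp) * np.sin(omega * t) / a
    theta_hat += dt * k * grad_est           # gradient ascent on the estimate

print(round(theta_hat, 2))  # close to the maximizer, 2.0
```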

  20. Optimal exploitation strategies for an animal population in a stochastic serially correlated environment

    USGS Publications Warehouse

    Anderson, D.R.

    1974-01-01

    Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2 ) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
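
    A minimal sketch of the stochastic dynamic programming machinery named in this record is given below: discrete population states, a stochastic environment affecting recruitment, and value iteration over harvest decisions. The population model, harvest grid, and parameter values are invented for illustration (a simple additive-mortality toy), not the mallard model of the study.

```python
import numpy as np

states = np.arange(0, 101, 10)            # breeding population sizes
harvest_rates = np.linspace(0.0, 0.5, 6)
recruit_mult = np.array([0.8, 1.0, 1.2])  # environmental outcomes (e.g., pond index)
recruit_prob = np.array([0.3, 0.4, 0.3])
carrying_capacity, growth, discount = 100.0, 0.6, 0.95

def next_state(n, h, m):
    """Post-harvest population with logistic-type recruitment (additive harvest)."""
    survivors = n * (1.0 - h)
    recruits = m * growth * survivors * (1.0 - survivors / carrying_capacity)
    return np.clip(survivors + recruits, states[0], states[-1])

V = np.zeros(len(states))
for _ in range(500):                      # value iteration on expected yield
    V_new = np.empty_like(V)
    policy = np.empty_like(V)
    for i, n in enumerate(states):
        best = -np.inf
        for h in harvest_rates:
            yield_now = h * n
            ev = sum(p * np.interp(next_state(n, h, m), states, V)
                     for m, p in zip(recruit_mult, recruit_prob))
            value = yield_now + discount * ev
            if value > best:
                best, policy[i] = value, h
        V_new[i] = best
    V = V_new

print(dict(zip(states.tolist(), policy.round(2).tolist())))  # harvest rate by state
```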

  1. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance, robustness and reliability based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem the analysis shows that standard deterministic approaches lead to designs with high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  2. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.

  3. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns follow a certain distribution, and the risk of the portfolio is measured using the Value-at-Risk (VaR). The portfolio optimization is carried out on the Mean-VaR model using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weight-vector equation that depends on the mean return vector of the assets, the unit vector, the covariance matrix of asset returns, and the risk tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the analysis of the return data of these five stocks, the weight composition vector and the efficient surface of the portfolio are obtained. The weight composition vector and efficient surface charts can be used as a guide for investors in investment decisions.
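
    The record derives closed-form weights via Lagrange multipliers and matrix algebra. As a simplified companion, the sketch below solves the analogous mean-variance problem with a risk-tolerance parameter tau (maximize tau*mu'w - w'Sigma w subject to 1'w = 1) by the same route; the VaR-specific terms of the paper's model are omitted, and the return data are randomly generated, not the Indonesian stock data of the study.

```python
import numpy as np

# Closed-form weights for: maximize tau * mu'w - w'Sigma w  s.t.  1'w = 1
# (mean-variance analogue of the Mean-VaR derivation, Lagrange multiplier
# eliminated with matrix algebra). Made-up daily returns for 5 assets.
rng = np.random.default_rng(3)
returns = 0.01 + 0.02 * rng.standard_normal((250, 5))
mu = returns.mean(axis=0)
Sigma = np.cov(returns, rowvar=False)

def optimal_weights(mu, Sigma, tau):
    ones = np.ones(len(mu))
    Si_mu = np.linalg.solve(Sigma, mu)
    Si_1 = np.linalg.solve(Sigma, ones)
    # Stationarity: tau*mu - 2*Sigma w - lam*1 = 0  =>  w = (tau*Si_mu - lam*Si_1)/2
    lam = (tau * ones @ Si_mu - 2.0) / (ones @ Si_1)
    return 0.5 * (tau * Si_mu - lam * Si_1)

for tau in (0.0, 0.5, 2.0):   # larger tau puts more weight on expected return
    w = optimal_weights(mu, Sigma, tau)
    print(tau, w.round(3), "sum =", round(w.sum(), 6))
```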

  4. Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies, and no laboratory experiments were performed. One part of the research involved study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for determination of thermal properties. Metal foam materials were also studied in which both conduction and radiation heat transfer are present. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.

  5. The Need to Trust and to Trust More Wisely in Academe

    ERIC Educational Resources Information Center

    Bowman, Richard F.

    2012-01-01

    Where trust is an issue, there is no trust. Trust in diverse organizations has never been lower. A shadow of doubt stalks one's every decision to trust collegially and institutionally. Still, colleagues sense intuitively that institutions cannot function optimally without a bedrock level of trust. In academic life, trust is a form of social…

  6. The optimal SAM surface functional group for producing a biomimetic HA coating on Ti.

    PubMed

    Liu, D P; Majewski, P; O'Neill, B K; Ngothai, Y; Colby, C B

    2006-06-15

    Commercial interest is growing in biomimetic methods that employ self assembled mono-layers (SAMs) to produce biocompatible HA coatings on Ti-based orthopedic implants. Recently, separate studies have considered HA formation for various SAM surface functional groups. However, these have often neglected to verify crystallinity of the HA coating, which is essential for optimal bioactivity. Furthermore, differing experimental and analytical methods make performance comparisons difficult. This article investigates and evaluates HA formation for four of the most promising surface functional groups: --OH, --SO(3)H, --PO(4)H(2) and --COOH. All of them successfully formed a HA coating at Ca/P ratios between 1.49 and 1.62. However, only the --SO(3)H and --COOH end groups produced a predominantly crystalline HA. Furthermore, the --COOH end group yielded the thickest layer and possessed crystalline characteristics very similar to that of the human bone. The --COOH end group appears to provide the optimal SAM surface interface for nucleation and growth of biomimetic crystalline HA. Intriguingly, this finding may lend support to explanations elsewhere of why human bone sialoprotein is such a potent nucleator of HA and is attributed to the protein's glutamic acid-rich sequences.

  7. Three-dimensional desirability spaces for quality-by-design-based HPLC development.

    PubMed

    Mokhtar, Hatem I; Abdel-Salam, Randa A; Hadad, Ghada M

    2015-04-01

    In this study, three-dimensional desirability spaces were introduced as a graphical representation method of design space. This was illustrated in the context of applying quality-by-design concepts to the development of a stability-indicating gradient reversed-phase high-performance liquid chromatography method for the determination of vinpocetine and α-tocopheryl acetate in a capsule dosage form. A mechanistic retention model to optimize gradient time, initial organic solvent concentration and ternary solvent ratio was constructed for each compound from six experimental runs. Then, the desirability function of each optimized criterion and subsequently the global desirability function were calculated throughout the knowledge space. The three-dimensional desirability spaces were plotted as zones exceeding a threshold value of the desirability index in the space defined by the three optimized method parameters. Probabilistic mapping of the desirability index aided selection of design space within the potential desirability subspaces. Three-dimensional desirability spaces offered better visualization and potential design spaces for the method as a function of three method parameters, with the ability to assign priorities to the critical qualities, as compared with the corresponding resolution spaces. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
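
    As a generic illustration (not the authors' exact retention models), the sketch below combines Derringer-type individual desirabilities into a global desirability index on a grid of three hypothetical method parameters; the resulting index is the kind of quantity that would be thresholded to plot a three-dimensional desirability space. All responses and limits are placeholders.

```python
import numpy as np

def desirability_larger_is_better(y, low, high, s=1.0):
    """Derringer-Suich desirability: 0 below `low`, 1 above `high`, power ramp in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

# Hypothetical grid over three method parameters (the knowledge space)
grad_time = np.linspace(10, 40, 31)        # gradient time (min)
init_org  = np.linspace(20, 50, 31)        # initial organic solvent (%)
ternary   = np.linspace(0.0, 1.0, 21)      # ternary solvent ratio
GT, IO, TR = np.meshgrid(grad_time, init_org, ternary, indexing="ij")

# Hypothetical responses predicted from some retention model (placeholders)
resolution  = 1.5 + 0.02 * GT - 0.01 * IO + 0.5 * TR
run_time_ok = 1.0 - GT / 60.0              # shorter runs are more desirable

d1 = desirability_larger_is_better(resolution, low=1.5, high=2.5)
d2 = desirability_larger_is_better(run_time_ok, low=0.2, high=0.9)
D = (d1 * d2) ** 0.5                       # global desirability (geometric mean)

# The 3D desirability space is the set of parameter combinations above a threshold
inside = D >= 0.8
print("fraction of knowledge space inside the desirability space:", inside.mean())
```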

  8. Multifactorial Experimental Design to Optimize the Anti-Inflammatory and Proangiogenic Potential of Mesenchymal Stem Cell Spheroids.

    PubMed

    Murphy, Kaitlin C; Whitehead, Jacklyn; Falahee, Patrick C; Zhou, Dejie; Simon, Scott I; Leach, J Kent

    2017-06-01

    Mesenchymal stem cell therapies promote wound healing by manipulating the local environment to enhance the function of host cells. Aggregation of mesenchymal stem cells (MSCs) into three-dimensional spheroids increases cell survival and augments their anti-inflammatory and proangiogenic potential, yet there is no consensus on the preferred conditions for maximizing spheroid function in this application. The objective of this study was to optimize conditions for forming MSC spheroids that simultaneously enhance their anti-inflammatory and proangiogenic nature. We applied a design of experiments (DOE) approach to determine the interaction between three input variables (number of cells per spheroid, oxygen tension, and inflammatory stimulus) on MSC spheroids by quantifying secretion of prostaglandin E2 (PGE2) and vascular endothelial growth factor (VEGF), two potent molecules in the MSC secretome. DOE results revealed that MSC spheroids formed with 40,000 cells per spheroid in 1% oxygen with an inflammatory stimulus (Spheroid 1) would exhibit enhanced PGE2 and VEGF production versus those formed with 10,000 cells per spheroid in 21% oxygen with no inflammatory stimulus (Spheroid 2). Compared to Spheroid 2, Spheroid 1 produced fivefold more PGE2 and fourfold more VEGF, providing the opportunity to simultaneously upregulate the secretion of these factors from the same spheroid. The spheroids induced macrophage polarization, sprout formation with endothelial cells, and keratinocyte migration in a human skin equivalent model, demonstrating efficacy on three key cell types that are dysfunctional in chronic non-healing wounds. We conclude that DOE-based analysis effectively identifies optimal culture conditions to enhance the anti-inflammatory and proangiogenic potential of MSC spheroids. Stem Cells 2017;35:1493-1504. © 2017 AlphaMed Press.

  9. Stiffness management of sheet metal parts using laser metal deposition

    NASA Astrophysics Data System (ADS)

    Bambach, Markus; Sviridov, Alexander; Weisheit, Andreas

    2017-10-01

    Tailored blanks are established solutions for the production of load-adapted sheet metal components. In the course of the individualization of production, such semi-finished products are gaining importance. In addition to tailored welded blanks and tailored rolled blanks, patchwork blanks have been developed which allow a local increase in sheet thickness by welding, gluing or soldering patches onto sheet metal blanks. Patchwork blanks, however, have several limitations: on the one hand, the limited freedom of design in their production and, on the other hand, the lack of an optimal material bond with the substrate. The increasing production of derivative and special vehicles on the basis of standard vehicles, prototype production and the functionalization of components require solutions with which semi-finished products and sheet metal components can be provided flexibly with local thickenings or functional elements with a firm metallurgical bond to the substrate. An alternative to tailored and patchwork blanks is, therefore, a free-form reinforcement applied by additive manufacturing via laser metal deposition (LMD). By combining metal forming and additive manufacturing, stiffness can be adapted to the loads based on standard components in a material-efficient manner and without the need to redesign the forming tools. This paper details a study of the potential of stiffness management by LMD using a demonstrator part. Sizing optimization is performed and part distortion is taken into account to find an optimal design for the cladding. A maximum stiffness increase of 167% is feasible with only 4.7% additional mass. Avoiding part distortion leads to a Pareto-optimal design which achieves 95% more stiffness with 6% added mass.

  10. Multifacet structure of observed reconstructed integral images.

    PubMed

    Martínez-Corral, Manuel; Javidi, Bahram; Martínez-Cuenca, Raúl; Saavedra, Genaro

    2005-04-01

    Three-dimensional images generated by an integral imaging system suffer from degradations in the form of a grid of multiple facets. This multifacet structure breaks the continuity of the observed image and therefore reduces its visual quality. We perform an analysis of this effect and present guidelines for the design of lenslet imaging parameters to optimize viewing conditions with respect to the multifacet degradation. We consider the optimization of the system in terms of field of view, observer position and pupil function, lenslet parameters, and type of reconstruction. Numerical tests are presented to verify the theoretical analysis.

  11. Informed consent for MRI and fMRI research: Analysis of a sample of Canadian consent documents

    PubMed Central

    2011-01-01

    Background Research ethics and the measures deployed to ensure ethical oversight of research (e.g., informed consent forms, ethics review) are vested with extremely important ethical and practical goals. Accordingly, these measures need to function effectively in real-world research and to follow high level standards. Methods We examined approved consent forms for Magnetic Resonance Imaging (MRI) and functional Magnetic Resonance Imaging (fMRI) studies approved by Canadian research ethics boards (REBs). Results We found evidence of variability in consent forms in matters of physical and psychological risk reporting. Approaches used to tackle the emerging issue of incidental findings exposed extensive variability between and within research sites. Conclusion The causes of variability in approved consent forms and studies need to be better understood. However, mounting evidence of administrative and practical hurdles within current ethics governance systems combined with potential sub-optimal provision of information to and protection of research subjects support other calls for more scrutiny of research ethics practices and applicable revisions. PMID:21235768

  12. Portfolio Optimization of Nanomaterial Use in Clean Energy Technologies.

    PubMed

    Moore, Elizabeth A; Babbitt, Callie W; Gaustad, Gabrielle; Moore, Sean T

    2018-04-03

    While engineered nanomaterials (ENMs) are increasingly incorporated in diverse applications, risks of ENM adoption remain difficult to predict and mitigate proactively. Current decision-making tools do not adequately account for ENM uncertainties including varying functional forms, unique environmental behavior, economic costs, unknown supply and demand, and upstream emissions. The complexity of the ENM system necessitates a novel approach: in this study, the adaptation of an investment portfolio optimization model is demonstrated for optimization of ENM use in renewable energy technologies. Where a traditional investment portfolio optimization model maximizes return on investment through optimal selection of stock, ENM portfolio optimization maximizes the performance of energy technology systems by optimizing selective use of ENMs. Cumulative impacts of multiple ENM material portfolios are evaluated in two case studies: organic photovoltaic cells (OPVs) for renewable energy and lithium-ion batteries (LIBs) for electric vehicles. Results indicate ENM adoption is dependent on overall performance and variance of the material, resource use, environmental impact, and economic trade-offs. From a sustainability perspective, improved clean energy applications can help extend product lifespans, reduce fossil energy consumption, and substitute ENMs for scarce incumbent materials.

  13. Adiabatic Quantum Anomaly Detection and Machine Learning

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen; Lidar, Daniel

    2012-02-01

    We present methods of anomaly detection and machine learning using adiabatic quantum computing. The machine learning algorithm is a boosting approach which seeks to optimally combine somewhat accurate classification functions to create a unified classifier which is much more accurate than its components. This algorithm then becomes the first part of the larger anomaly detection algorithm. In the anomaly detection routine, we first use adiabatic quantum computing to train two classifiers which detect two sets, the overlap of which forms the anomaly class. We call this the learning phase. Then, in the testing phase, the two learned classification functions are combined to form the final Hamiltonian for an adiabatic quantum computation, the low energy states of which represent the anomalies in a binary vector space.

  14. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  15. Relations between information, time, and value of water

    NASA Astrophysics Data System (ADS)

    Weijs, S. V.; Galindo, L. C.

    2015-12-01

    This research uses stochastic dynamic programming (SDP) as a tool to reveal economic information about managed water resources. An application to the operation of an example hydropower reservoir is presented. SDP explicitly balances the marginal value of water for immediate use against the expected opportunity cost of not having more water available for future use. The result of an SDP analysis is a steady-state policy, which gives the optimal decision as a function of the state. A commonly applied form gives the optimal release as a function of the month, current reservoir level and current inflow to the reservoir. The steady-state policy can be complemented with a real-time management strategy that can depend on additional real-time information. An information-theoretical perspective is given on how this information influences the value of water, and how to deal with that influence in hydropower reservoir optimization. This results in some conjectures about how the information gain from real-time operation could affect the optimal long-term policy. Another issue is the sharing of the increased benefits that result from this information gain in a multi-objective setting. It is argued that this should be accounted for in negotiations about an operation policy.
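
    A minimal sketch of the SDP backward recursion behind such a policy is given below, assuming a toy reservoir with discretized storage, a small set of inflow scenarios with given probabilities, and a square-root benefit on release; iterating the recursion until the policy stabilizes would give the steady-state policy described in the abstract. All numbers are hypothetical.

```python
import numpy as np

S = np.linspace(0.0, 100.0, 21)        # storage grid
inflows = np.array([5.0, 15.0, 30.0])  # inflow scenarios
p = np.array([0.3, 0.5, 0.2])          # their probabilities
releases = np.linspace(0.0, 40.0, 41)  # candidate releases
benefit = lambda r: np.sqrt(r)         # immediate value of water released

n_stages, V = 36, np.zeros(len(S))     # terminal value function = 0
policy = np.zeros((n_stages, len(S)))

for t in reversed(range(n_stages)):
    V_new = np.empty_like(V)
    for i, s in enumerate(S):
        best, best_r = -np.inf, 0.0
        for r in releases:
            if r > s + inflows.min():          # release must be physically available
                continue
            # next storage per inflow scenario, bounded by the grid
            s_next = np.clip(s - r + inflows, S[0], S[-1])
            # immediate benefit plus expected future value (linear interpolation on grid)
            val = benefit(r) + p @ np.interp(s_next, S, V)
            if val > best:
                best, best_r = val, r
        V_new[i], policy[t, i] = best, best_r
    V = V_new

print("optimal release at mid storage, first stage:", policy[0, len(S) // 2])
```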

  16. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy.

    PubMed

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    The lateral penumbra of a multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, the lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in the parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for the circular arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while using a B-spline the minimum of the penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of multileaf collimators.

  17. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
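
    The modified Whittaker smoother mentioned above is a penalized least-squares filter; a generic (unmodified) version is sketched below under the usual second-difference penalty, with optional weights that could, for instance, be emphasized in detected zero-baseline regions. This is an illustration of the smoother only, not the authors' simultaneous phase-and-baseline procedure, and the test spectrum is synthetic.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, lam=1e5, weights=None):
    """Solve (W + lam * D'D) z = W y, where D is the second-difference operator."""
    n = y.size
    w = np.ones(n) if weights is None else weights
    W = sparse.diags(w)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = W + lam * (D.T @ D)
    return spsolve(A.tocsc(), w * y)

# Hypothetical spectrum: two peaks on a curved baseline plus noise
x = np.linspace(0, 10, 2000)
baseline = 0.02 * (x - 5) ** 2
peaks = np.exp(-((x - 3) / 0.05) ** 2) + 0.6 * np.exp(-((x - 7) / 0.08) ** 2)
y = baseline + peaks + 0.01 * np.random.default_rng(1).normal(size=x.size)

estimated_baseline = whittaker_smooth(y, lam=1e7)   # large lam -> very smooth curve
corrected = y - estimated_baseline
```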

  18. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
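
    A small numerical sketch of this trade-off (with made-up numbers, not the authors' cost-benefit functions) is shown below: the transit time is modeled with a truncated normal distribution, and the average transit time is chosen to minimize a hypothetical speed-up cost subject to a minimum punctuality level relative to a fixed timetabled arrival.

```python
import numpy as np
from scipy.stats import truncnorm

sigma, lower, upper = 4.0, 20.0, 60.0   # transit-time spread and physical bounds (min)
timetable_arrival = 40.0                # timetabled arrival, minutes after departure
target_punctuality = 0.95               # required probability of arriving on time

def punctuality(mu):
    """P(arrive by timetable) when transit time ~ truncated normal with mean mu."""
    a, b = (lower - mu) / sigma, (upper - mu) / sigma
    return truncnorm(a, b, loc=mu, scale=sigma).cdf(timetable_arrival)

def cost(mu):
    """Hypothetical operating cost: every minute shaved off the average costs money."""
    return 500.0 * (upper - mu)

candidates = np.linspace(25.0, 45.0, 401)
feasible = [mu for mu in candidates if punctuality(mu) >= target_punctuality]
best = min(feasible, key=cost)          # cheapest average transit time that is punctual
print(f"optimal average transit time: {best:.2f} min, cost: {cost(best):.0f}")
```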

  19. Algorithms for Maneuvering Spacecraft Around Small Bodies

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Bechet; Bayard, David

    2006-01-01

    A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
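
    The spectral discretization step can be sketched as follows: the control input is represented by a small number of Chebyshev coefficients, the dynamics are propagated on a discretized time axis, and an optimizer searches over the coefficients. This is a generic illustration of the idea using a toy double integrator and scipy's general-purpose SLSQP solver, not the convex-programming formulation described in the document.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import minimize

T, n_steps, n_coeff = 10.0, 200, 6
t = np.linspace(0.0, T, n_steps)
tau = 2.0 * t / T - 1.0          # map time to [-1, 1] for the Chebyshev basis

def simulate(coeffs):
    """Toy double integrator x'' = u, with u(t) built from Chebyshev coefficients."""
    u = C.chebval(tau, coeffs)
    x, v, dt = 0.0, 0.0, t[1] - t[0]
    for ui in u[:-1]:            # explicit Euler integration
        v += ui * dt
        x += v * dt
    return x, v, u

def objective(coeffs):           # "fuel": integral of |u| over the maneuver
    _, _, u = simulate(coeffs)
    return np.trapz(np.abs(u), t)

def terminal_state(coeffs):      # equality constraint: reach x = 1, v = 0 at t = T
    x, v, _ = simulate(coeffs)
    return [x - 1.0, v]

res = minimize(objective, x0=np.zeros(n_coeff),
               constraints={"type": "eq", "fun": terminal_state},
               method="SLSQP")
print("optimal Chebyshev coefficients:", np.round(res.x, 4))
```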

  20. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    NASA Astrophysics Data System (ADS)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

    In many geodetic engineering applications it is necessary to describe a measured data point cloud, acquired, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-Splines as basis functions. State-of-the-art approaches to determining B-Splines yield results that are seriously affected by the occurrence of data gaps and outliers. Optimal and robust B-Spline fitting depends, however, on optimal selection of the knot vector. Hence, in our approach we combine Monte-Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-Spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The above-mentioned approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.

  1. Optimal control of dissipative nonlinear dynamical systems with triggers of coupled singularities

    NASA Astrophysics Data System (ADS)

    Stevanović Hedrih, K.

    2008-02-01

    This paper analyses the controllability of motion of nonconservative nonlinear dynamical systems in which triggers of coupled singularities exist or appear. It is shown that the phase plane method is useful for the analysis of the nonlinear dynamics of nonconservative systems with one degree of freedom and for the design of control strategies, and that it can also be used for controlling the relative motion in rheonomic systems having an equivalent scleronomic conservative or nonconservative system. For the system with one generalized coordinate described by a nonlinear differential equation of nonlinear dynamics with a trigger of coupled singularities, the functions of system potential energy and conservative force must satisfy conditions defined by a theorem on the existence of a trigger of coupled singularities and of the separatrix in the form of an open spiral of the number eight. The task of optimal control of the defined nonconservative dynamical system is: by using a controlling force acting on the system, transfer the initial state of the nonlinear dynamics of the system into the final state of the nonlinear dynamics in minimal time.

  2. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. CVaR represents the uncertainties of contamination injection in the form of a probability distribution function and accounts for low-probability extreme events, i.e. the extreme losses in the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement, namely the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the influence of each criterion on the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that approximately covers all regions of the WDS. The optimal values of the CVaR of affected population and detection time, as well as the probability of undetected events, for the best solution are 17,055 persons, 31 min and 0.045%, respectively. The results obtained for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme values of losses in a WDS.

  3. Improved Sensitivity Relations in State Constrained Optimal Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bettiol, Piernicola, E-mail: piernicola.bettiol@univ-brest.fr; Frankowska, Hélène, E-mail: frankowska@math.jussieu.fr; Vinter, Richard B., E-mail: r.vinter@imperial.ac.uk

    2015-04-15

    Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both ‘full’ and ‘partial’ sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously. The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proof, and because it is validated for a stronger set of necessary conditions.

  4. ωB97M-V: A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    2016-06-07

    A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation is presented in this paper. The final 12-parameter functional form is selected from approximately 10 × 10^9 candidate fits that are trained on a training set of 870 data points and tested on a primary test set of 2964 data points. The resulting density functional, ωB97M-V, is further tested for transferability on a secondary test set of 1152 data points. For comparison, ωB97M-V is benchmarked against 11 leading density functionals including M06-2X, ωB97X-D, M08-HX, M11, ωM05-D, ωB97X-V, and MN15. Encouragingly, the overall performance of ωB97M-V on nearly 5000 data points clearly surpasses that of all of the tested density functionals. Finally, in order to facilitate the use of ωB97M-V, its basis set dependence and integration grid sensitivity are thoroughly assessed, and recommendations that take into account both efficiency and accuracy are provided.

  5. [The isozymes of stearil-coenzymeA-desaturase and insulin activity in the light of phylogenetic theory of pathology. Oleic fatty acid and realization of biologic functions of trophology and locomotion].

    PubMed

    2013-11-01

    The functions of the isozymes of stearil-coenzymeA-desaturase formed at different stages of phylogeny: stearil-coenzymeA-desaturase 1 during realization of the biologic function of trophology, and stearil-coenzymeA-desaturase 2, together with the insulin system, during realization of the biologic function of locomotion billions of years later. Stearil-coenzymeA-desaturase 1 transforms into C18:1 oleic fatty acid only exogenous C16:0 palmitinic saturated fatty acid. Stearil-coenzymeA-desaturase 2 transforms only endogenous palmitinic saturated fatty acid synthesized from glucose. The biologic role of insulin is the energy support of the biologic function of locomotion. Insulin, through expression of stearil-coenzymeA-desaturase 2, transforms the energetically non-optimal palmitinic variant of substrate metabolism into the highly effective oleic variant for the cells' supply of energy (saturated and monounsaturated fatty acids). A surplus of palmitinic saturated fatty acid in food is involved in the pathogenesis of insulin resistance and in the derangement of hormone synthesis by the beta-cells of the islets. Insulin resistance and diabetes mellitus are primarily derangements of the metabolism of saturated and monounsaturated fatty acids and of the energy supply of the organism, and only secondarily derangements of carbohydrate metabolism. It is desirable to restrict the food intake of exogenous palmitinic saturated fatty acid. The reasons are the low expression of the insulin-independent stearil-coenzymeA-desaturase 2, the marked lipotoxicity of the polar form of palmitinic saturated fatty acid, and the synthesis of non-optimal palmitinic triglycerides instead of the physiologic and energetically more effective oleic triglycerides.

  6. $L^1$ penalization of volumetric dose objectives in optimal control of PDEs

    DOE PAGES

    Barnard, Richard C.; Clason, Christian

    2017-02-11

    This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L^1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.

  7. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    PubMed Central

    Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing

    2017-01-01

    Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed in this paper for the global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm appears clearly superior to the other three algorithms. At the same time, the proposed algorithm can be applied to optimize an extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers can achieve higher accuracy and better stability to some extent. PMID:29085425

  8. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
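
    The first-order moment propagation underlying this approach can be sketched in a few lines: given sensitivity derivatives of an output with respect to independent normal inputs, the output mean and variance are approximated and a probabilistic constraint is rewritten in deterministic form. The function and numbers below are placeholders, not the CFD outputs of the study.

```python
import numpy as np
from scipy.stats import norm

def fosm_moments(f, x_mean, x_std, h=1e-6):
    """First-order second-moment approximation of the mean and std of f(x)
    for independent normal inputs with means x_mean and standard deviations x_std."""
    x_mean = np.asarray(x_mean, dtype=float)
    f0 = f(x_mean)
    grad = np.empty_like(x_mean)
    for i in range(x_mean.size):
        xp = x_mean.copy()
        xp[i] += h
        grad[i] = (f(xp) - f0) / h          # forward-difference sensitivity derivative
    var = np.sum((grad * np.asarray(x_std)) ** 2)
    return f0, np.sqrt(var)

# Placeholder "CFD output" (e.g., a constraint function) of two flow parameters
g = lambda x: 1.2 - 0.8 * x[0] + 0.3 * x[1] ** 2

mean_g, std_g = fosm_moments(g, x_mean=[1.0, 0.5], x_std=[0.05, 0.02])

# Probabilistic constraint P(g >= 0) >= 0.999 becomes mean_g - k * std_g >= 0
k = norm.ppf(0.999)
print("constraint satisfied:", mean_g - k * std_g >= 0.0)
```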

  9. The universal scissor component: Optimization of a reconfigurable component for deployable scissor structures

    NASA Astrophysics Data System (ADS)

    Alegria Mira, Lara; Thrall, Ashley P.; De Temmerman, Niels

    2016-02-01

    Deployable scissor structures are well equipped for temporary and mobile applications since they are able to change their form and functionality. They are structural mechanisms that transform from a compact state to an expanded, fully deployed configuration. A barrier to the current design and reuse of scissor structures, however, is that they are traditionally designed for a single purpose. Alternatively, a universal scissor component (USC)-a generalized element which can achieve all traditional scissor types-introduces an opportunity for reuse in which the same component can be utilized for different configurations and spans. In this article, the USC is optimized for structural performance. First, an optimized length for the USC is determined based on a trade-off between component weight and structural performance (measured by deflections). Then, topology optimization, using the simulated annealing algorithm, is implemented to determine a minimum weight layout of beams within a single USC component.

  10. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.

  11. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low activity prelinear region, the moderate activity linear region, and the high activity power-law region. The overall form of the cost analysis function appears to robustly fit the power law form.

  12. New optimization scheme to obtain interaction potentials for oxide glasses

    NASA Astrophysics Data System (ADS)

    Sundararaman, Siddharth; Huang, Liping; Ispas, Simona; Kob, Walter

    2018-05-01

    We propose a new scheme to parameterize effective potentials that can be used to simulate atomic systems such as oxide glasses. As input data for the optimization, we use the radial distribution functions of the liquid and the vibrational density of state of the glass, both obtained from ab initio simulations, as well as experimental data on the pressure dependence of the density of the glass. For the case of silica, we find that this new scheme facilitates finding pair potentials that are significantly more accurate than the previous ones even if the functional form is the same, thus demonstrating that even simple two-body potentials can be superior to more complex three-body potentials. We have tested the new potential by calculating the pressure dependence of the elastic moduli and found a good agreement with the corresponding experimental data.

  13. Electro-thermal battery model identification for automotive applications

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.

    This paper describes a model identification procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on the state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid inside a large range of temperatures and state-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multiple step genetic algorithm based optimization procedure designed for large scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium ion iron-phosphate battery.
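
    A minimal sketch of such an equivalent-circuit model is given below: a single RC branch whose parameters and open-circuit voltage are scheduled on state-of-charge through linear splines (here simply np.interp over hypothetical breakpoints), simulated with an explicit time step. The parameter values are illustrative only and are not the identified values from the paper.

```python
import numpy as np

# Hypothetical breakpoints of the linear splines (parameters vs. state-of-charge)
soc_knots = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
R0_knots  = np.array([0.012, 0.010, 0.009, 0.009, 0.011])   # ohmic resistance (ohm)
R1_knots  = np.array([0.020, 0.015, 0.012, 0.012, 0.014])   # RC resistance (ohm)
C1_knots  = np.array([1800., 2200., 2500., 2500., 2300.])   # RC capacitance (F)
ocv_knots = np.array([3.0, 3.3, 3.45, 3.7, 4.1])            # open-circuit voltage (V)

def simulate(current, dt=1.0, soc0=0.8, capacity_Ah=2.3):
    """Terminal voltage of a 1-RC equivalent circuit with SOC-scheduled parameters."""
    soc, v_rc, volts = soc0, 0.0, []
    for i in current:                               # positive current = discharge
        soc -= i * dt / (capacity_Ah * 3600.0)
        R0 = np.interp(soc, soc_knots, R0_knots)
        R1 = np.interp(soc, soc_knots, R1_knots)
        C1 = np.interp(soc, soc_knots, C1_knots)
        ocv = np.interp(soc, soc_knots, ocv_knots)
        v_rc += dt * (i / C1 - v_rc / (R1 * C1))    # RC branch dynamics
        volts.append(ocv - v_rc - R0 * i)
    return np.array(volts)

pulse = np.r_[np.zeros(60), 10.0 * np.ones(120), np.zeros(120)]  # 10 A discharge pulse
voltage = simulate(pulse)
print(voltage[:3], voltage[-1])
```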

  14. Clues for biomimetics from natural composite materials

    PubMed Central

    Lapidot, Shaul; Meirovitch, Sigal; Sharon, Sigal; Heyman, Arnon; Kaplan, David L; Shoseyov, Oded

    2013-01-01

    Bio-inspired material systems are derived from different living organisms such as plants, arthropods, mammals and marine organisms. These biomaterial systems from nature are always present in the form of composites, with molecular-scale interactions optimized to direct functional features. With interest in replacing synthetic materials with natural materials due to biocompatibility, sustainability and green chemistry issues, it is important to understand the molecular structure and chemistry of the raw component materials to also learn from their natural engineering, interfaces and interactions leading to durable and highly functional material architectures. This review will focus on applications of biomaterials in single material forms, as well as biomimetic composites inspired by natural organizational features. Examples of different natural composite systems will be described, followed by implementation of the principles underlying their composite organization into artificial bio-inspired systems for materials with new functional features for future medicine. PMID:22994958

  15. Clues for biomimetics from natural composite materials.

    PubMed

    Lapidot, Shaul; Meirovitch, Sigal; Sharon, Sigal; Heyman, Arnon; Kaplan, David L; Shoseyov, Oded

    2012-09-01

    Bio-inspired material systems are derived from different living organisms such as plants, arthropods, mammals and marine organisms. These biomaterial systems from nature are always present in the form of composites, with molecular-scale interactions optimized to direct functional features. With interest in replacing synthetic materials with natural materials due to biocompatibility, sustainability and green chemistry issues, it is important to understand the molecular structure and chemistry of the raw component materials to also learn from their natural engineering, interfaces and interactions leading to durable and highly functional material architectures. This review will focus on applications of biomaterials in single material forms, as well as biomimetic composites inspired by natural organizational features. Examples of different natural composite systems will be described, followed by implementation of the principles underlying their composite organization into artificial bio-inspired systems for materials with new functional features for future medicine.

  16. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  17. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  18. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, i.e. the optimization process, is the procedure followed to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function that the different methods fit to the data is three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are the following: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges less reliably and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE among the parameters that define the fit function, within an interval from 1% to 5%.
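
    The fit described above (three Lorentzians plus a straight line) can be reproduced generically with a Levenberg-Marquardt least-squares routine; the sketch below uses scipy's curve_fit on synthetic data and is only an illustration of the procedure, not the station's processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amp, f0, width):
    return amp * width**2 / ((f - f0) ** 2 + width**2)

def model(f, a1, f1, w1, a2, f2, w2, a3, f3, w3, slope, offset):
    """Three Lorentzians plus a straight line, as fitted to the ELF amplitude spectrum."""
    return (lorentzian(f, a1, f1, w1) + lorentzian(f, a2, f2, w2)
            + lorentzian(f, a3, f3, w3) + slope * f + offset)

# Synthetic spectrum with modes near the nominal Schumann frequencies
f = np.linspace(6.0, 25.0, 400)
true = model(f, 1.0, 7.8, 1.5, 0.7, 14.1, 1.8, 0.5, 20.3, 2.0, -0.005, 0.2)
data = true + 0.02 * np.random.default_rng(3).normal(size=f.size)

p0 = [1, 8, 2, 0.5, 14, 2, 0.5, 20, 2, 0, 0]           # initial guess
popt, pcov = curve_fit(model, f, data, p0=p0)           # Levenberg-Marquardt by default
print("fitted central frequencies:", np.round(popt[[1, 4, 7]], 2), "Hz")
```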

  19. Optimal vaccination strategies and rational behaviour in seasonal epidemics.

    PubMed

    Doutor, Paulo; Rodrigues, Paula; Soares, Maria do Céu; Chalub, Fabio A C C

    2016-12-01

    We consider a SIRS model with time dependent transmission rate. We assume time dependent vaccination which confers the same immunity as natural infection. We study two types of vaccination strategies: (i) optimal vaccination, in the sense that it minimizes the effort of vaccination in the set of vaccination strategies for which, for any sufficiently small perturbation of the disease free state, the number of infectious individuals is monotonically decreasing; (ii) Nash-equilibria strategies where all individuals simultaneously minimize the joint risk of vaccination versus the risk of the disease. The former case corresponds to an optimal solution for mandatory vaccinations, while the second corresponds to the equilibrium to be expected if vaccination is fully voluntary. We are able to show the existence of both optimal and Nash strategies in a general setting. In general, these strategies will not be functions but Radon measures. For specific forms of the transmission rate, we provide explicit formulas for the optimal and the Nash vaccination strategies.
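
    For readers who want to experiment, a minimal SIRS model with a seasonal transmission rate and a time-dependent vaccination rate is sketched below; the parameter values and the vaccination schedule are arbitrary placeholders, not the optimal or Nash strategies derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, delta = 1 / 7.0, 1 / 180.0          # recovery rate, loss-of-immunity rate (1/day)

def beta(t):                                # seasonal (time-dependent) transmission rate
    return 0.3 * (1.0 + 0.4 * np.cos(2 * np.pi * t / 365.0))

def vaccination(t):                         # arbitrary time-dependent vaccination rate
    return 0.002 if 200 <= t % 365 <= 260 else 0.0

def sirs(t, y):
    s, i, r = y
    ds = -beta(t) * s * i - vaccination(t) * s + delta * r
    di = beta(t) * s * i - gamma * i
    dr = gamma * i + vaccination(t) * s - delta * r
    return [ds, di, dr]

sol = solve_ivp(sirs, (0.0, 3 * 365.0), [0.99, 0.01, 0.0], max_step=1.0)
print("peak prevalence over three years:", sol.y[1].max())
```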

  20. Optimal control of vaccination rate in an epidemiological model of Clostridium difficile transmission.

    PubMed

    Stephenson, Brittany; Lanzas, Cristina; Lenhart, Suzanne; Day, Judy

    2017-12-01

    The spore-forming, gram-positive bacterium Clostridium difficile can cause severe intestinal illness. A striking increase in the number of cases of C. difficile infection (CDI) among hospitals has highlighted the need to better understand how to prevent the spread of CDI. In our paper, we modify and update a compartmental model of nosocomial C. difficile transmission to include vaccination. We then apply optimal control theory to determine the time-varying optimal vaccination rate that minimizes a combination of disease prevalence and spread in the hospital population as well as cost, in terms of time and money, associated with vaccination. Various hospital scenarios are considered, such as times of increased antibiotic prescription rate and times of outbreak, to see how such scenarios modify the optimal vaccination rate. By comparing the values of the objective functional with constant vaccination rates to those with time-varying optimal vaccination rates, we illustrate the benefits of time-varying controls.

  1. A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.

    PubMed

    Sun, Tao; Xu, Ming-Hai

    2017-01-01

    Quantum-behaved particle swarm optimization (QPSO) is a variant of traditional particle swarm optimization (PSO). The QPSO, originally developed for continuous search spaces, outperforms the traditional PSO in search ability. This paper analyzes the main factors that impact the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region, thus proposing a new binary algorithm named the swarm optimization genetic algorithm (SOGA), because in form it is more like a genetic algorithm (GA) than PSO. SOGA has crossover and mutation operators like GA but does not need the crossover and mutation probabilities to be set, so it has fewer parameters to control. The proposed algorithm was tested with several nonlinear high-dimensional functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.

  2. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In the topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator to the complex field variable complicates the adjoint sensitivity, causing the originally real-valued design variable to become complex during the iterative solution procedure; the adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting the split variables into the wave equations to derive coupled equations equivalent to the original wave equations, with the infinite free space truncated by perfectly matched layers. The topology optimization problems for electromagnetic waves are then transformed into forms defined on real functional spaces instead of complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived, and the phase-dependence problem is avoided for the derived structural topology. Several numerical examples are implemented to demonstrate the robustness of the derived self-consistent adjoint analysis.

  3. [Public relations in institutions and establishments of the health administration system].

    PubMed

    Martynenko, A V

    2002-01-01

    The article is devoted to the development of directions and specific functions of public relations (PR) activities for the bodies and institutions of the health administration system. Priorities are set forth depending on their form of ownership. A comprehensive use of approaches to carrying out PR activities makes it possible to optimize both the work within the system itself and its relations with society as a whole.

  4. "Efficiency Space" - A Framework for Evaluating Joint Evaporation and Runoff Behavior

    NASA Technical Reports Server (NTRS)

    Koster, Randal

    2014-01-01

    At the land surface, higher soil moisture levels generally lead to both increased evaporation for a given amount of incoming radiation (increased evaporation efficiency) and increased runoff for a given amount of precipitation (increased runoff efficiency). Evaporation efficiency and runoff efficiency can thus be said to vary with each other, motivating the development of a unique hydroclimatic analysis framework. Using a simple water balance model fitted, in different experiments, with a wide variety of functional forms for evaporation and runoff efficiency, we transform net radiation and precipitation fields into fields of streamflow that can be directly evaluated against observations. The optimal combination of the functional forms, i.e., the combination that produces the most skillful streamflow simulations, provides an indication of how evaporation and runoff efficiencies vary with each other in nature, a relationship that can be said to define the overall character of land surface hydrological processes, at least to first order. The inferred optimal relationship is represented herein as a curve in efficiency space and should be valuable for the evaluation and development of GCM-based land surface models, which by this measure are often found to be suboptimal.
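
    A toy version of this analysis framework might look like the sketch below: evaporation efficiency and runoff efficiency are both simple functions of a soil-moisture store, precipitation and net radiation drive a monthly water balance, and the simulated streamflow is the quantity that would be compared with observations. The functional forms and parameters are illustrative only.

```python
import numpy as np

def evap_efficiency(w, w_max):      # fraction of net radiation energy used for evaporation
    return np.clip(w / w_max, 0.0, 1.0)

def runoff_efficiency(w, w_max):    # fraction of precipitation leaving as runoff
    return np.clip((w / w_max) ** 2, 0.0, 1.0)

def simulate_streamflow(precip, net_rad_equiv, w_max=150.0):
    """Monthly water balance: net_rad_equiv is net radiation expressed in mm of
    potential evaporation; returns simulated runoff (mm/month)."""
    w, runoff = w_max / 2.0, []
    for p, rn in zip(precip, net_rad_equiv):
        q = runoff_efficiency(w, w_max) * p
        e = evap_efficiency(w, w_max) * rn
        w = np.clip(w + p - q - e, 0.0, w_max)
        runoff.append(q)
    return np.array(runoff)

rng = np.random.default_rng(7)
precip = rng.gamma(2.0, 40.0, size=120)   # hypothetical monthly precipitation (mm)
net_rad = np.full(120, 90.0)              # hypothetical net radiation (mm/month equivalent)
print(simulate_streamflow(precip, net_rad)[:6])
```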

  5. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from being time consuming and lacking robustness. To address these issues, a novel tracking method is presented via combining sparse representation and an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. Thus, the trained ELM classification function is able to remove most of the candidate samples related to background contents efficiently, thereby reducing the total computational cost of the following sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities to be a target) of samples on the ELM classification function are used to construct a new manifold learning constraint term of the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix form solution allows the candidate samples to be calculated in parallel, thereby leading to a higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359

  6. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of the macro coding units is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface to generate the required four-beam radiations with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing excellent performance of the automatic designs by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  7. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, the functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, which is superior to any operating rule, provides the samples used to derive the rules; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
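    The BMA combination step can be sketched as a weighted mixture of the member models' predictive distributions; in the sketch below the member decisions, spreads, and weights are invented placeholders (the paper estimates weights and intervals with Markov Chain Monte Carlo rather than fixing them).

      import numpy as np

      rng = np.random.default_rng(0)

      # Release decisions from three hypothetical operating-rule models (m3/s).
      member_means = np.array([520.0, 545.0, 530.0])   # piecewise linear, surface fit, LS-SVM
      member_sigmas = np.array([30.0, 25.0, 20.0])     # assumed member predictive spreads
      weights = np.array([0.2, 0.3, 0.5])              # illustrative BMA weights (sum to 1)

      # Sample the BMA mixture: pick a member by its weight, then draw from that member.
      idx = rng.choice(3, size=20000, p=weights)
      samples = rng.normal(member_means[idx], member_sigmas[idx])

      bma_mean = weights @ member_means
      lo, hi = np.percentile(samples, [5, 95])         # 90% release interval of the mixture
      print(f"BMA mean release: {bma_mean:.1f}, 90% interval: [{lo:.1f}, {hi:.1f}]")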

  8. Integrating Test-Form Formatting into Automated Test Assembly

    ERIC Educational Resources Information Center

    Diao, Qi; van der Linden, Wim J.

    2013-01-01

    Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…

  9. Calibrating the Spatiotemporal Root Density Distribution for Macroscopic Water Uptake Models Using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Li, N.; Yue, X. Y.

    2018-03-01

    Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As the water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant with depth and time, or dependent only on depth, for simplification. However, under field conditions this function varies with the type of soil and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of the Tikhonov regularization theory, adding an additional constraint to the objective function. Then the formulated nonlinear optimization problem is numerically solved with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity in calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method features optimization of RDDF without any prior form, which is applicable to a more general root water uptake model. Numerical examples are performed to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
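    The essence of the Tikhonov-regularized calibration can be shown on a linear toy problem: a smoothing forward operator mapping a discretized root density profile to observations is inverted by minimizing a data misfit plus a smoothness penalty. The operator, true profile, and regularization weight below are illustrative assumptions, not the Richards-equation setting of the study.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 50
      depth = np.linspace(0.0, 1.0, n)

      # Hypothetical smooth "true" root density profile and a smoothing forward map.
      g_true = np.exp(-4.0 * depth) * (1.0 + 0.3 * np.sin(6.0 * depth))
      G = np.exp(-np.abs(np.subtract.outer(depth, depth)) / 0.1)
      d_obs = G @ g_true + rng.normal(0.0, 0.01, n)          # noisy observations

      # Tikhonov functional: minimize ||G g - d||^2 + alpha * ||L g||^2, L = 1st difference.
      L = np.diff(np.eye(n), axis=0)
      alpha = 1e-2
      g_tik = np.linalg.solve(G.T @ G + alpha * (L.T @ L), G.T @ d_obs)

      g_naive = np.linalg.lstsq(G, d_obs, rcond=None)[0]     # unregularized inverse
      print("regularized error:  ", round(float(np.linalg.norm(g_tik - g_true)), 3))
      print("unregularized error:", round(float(np.linalg.norm(g_naive - g_true)), 3))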

  10. Empirical scoring functions for advanced protein-ligand docking with PLANTS.

    PubMed

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2009-01-01

    In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean listnc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.

  11. Differentiable McCormick relaxations

    DOE PAGES

    Khan, Kamil A.; Watson, Harry A. J.; Barton, Paul I.

    2016-05-27

    McCormick's classical relaxation technique constructs closed-form convex and concave relaxations of compositions of simple intrinsic functions. These relaxations have several properties which make them useful for lower bounding problems in global optimization: they can be evaluated automatically, accurately, and computationally inexpensively, and they converge rapidly to the relaxed function as the underlying domain is reduced in size. They may also be adapted to yield relaxations of certain implicit functions and differential equation solutions. However, McCormick's relaxations may be nonsmooth, and this nonsmoothness can create theoretical and computational obstacles when relaxations are to be deployed. This article presents a continuously differentiable variant of McCormick's original relaxations in the multivariate McCormick framework of Tsoukalas and Mitsos. Gradients of the new differentiable relaxations may be computed efficiently using the standard forward or reverse modes of automatic differentiation. Furthermore, extensions to differentiable relaxations of implicit functions and solutions of parametric ordinary differential equations are discussed. A C++ implementation based on the library MC++ is described and applied to a case study in nonsmooth nonconvex optimization.
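    For a flavor of the underlying construction, the classical McCormick envelope of a bilinear term w = x*y on a box can be written in closed form; the differentiable variant discussed in the article addresses the nonsmooth max/min operations that arise when such rules are composed. A minimal sketch with arbitrary bounds:

      # Classical McCormick convex/concave envelope of w = x*y on [xL, xU] x [yL, yU].
      def mccormick_bilinear(x, y, xL, xU, yL, yU):
          under = max(xL * y + yL * x - xL * yL,      # convex underestimator
                      xU * y + yU * x - xU * yU)
          over = min(xU * y + yL * x - xU * yL,       # concave overestimator
                     xL * y + yU * x - xL * yU)
          return under, over

      xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0
      for (x, y) in [(0.0, 1.0), (1.5, 2.5), (-0.5, 0.75)]:
          lo, hi = mccormick_bilinear(x, y, xL, xU, yL, yU)
          assert lo <= x * y <= hi                    # relaxation sandwiches the true product
          print(f"x*y = {x*y: .3f}   McCormick bounds: [{lo: .3f}, {hi: .3f}]")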

  12. Broken symmetry in a two-qubit quantum control landscape

    NASA Astrophysics Data System (ADS)

    Bukov, Marin; Day, Alexandre G. R.; Weinberg, Phillip; Polkovnikov, Anatoli; Mehta, Pankaj; Sels, Dries

    2018-05-01

    We analyze the physics of optimal protocols to prepare a target state with high fidelity in a symmetrically coupled two-qubit system. By varying the protocol duration, we find a discontinuous phase transition, which is characterized by a spontaneous breaking of a Z2 symmetry in the functional form of the optimal protocol, and occurs below the quantum speed limit. We study this phase in detail and demonstrate that even though the high-fidelity protocols are degenerate with respect to their fidelity, they lead to final states of different entanglement entropy shared between the qubits. Consequently, while globally both optimal protocols are equally far away from the target state, one is locally closer than the other. An approximate variational mean-field theory which captures the physics of the different phases is developed.

  13. Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems

    PubMed Central

    Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng

    2012-01-01

    Variable structure strategy is widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective and the implicit form of the control law is analytically obtained by using the principle of optimality. The control law and the optimal cost function are explicitly solved iteratively. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633

  14. Combinatorial materials synthesis and high-throughput screening: an integrated materials chip approach to mapping phase diagrams and discovery and optimization of functional materials.

    PubMed

    Xiang, X D

    Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. Microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.

  15. Formulation, functional evaluation and ex vivo performance of thermoresponsive soluble gels - A platform for therapeutic delivery to mucosal sinus tissue.

    PubMed

    Pandey, Preeti; Cabot, Peter J; Wallwork, Benjamin; Panizza, Benedict J; Parekh, Harendra S

    2017-01-01

    Mucoadhesive in situ gelling systems (soluble gels) have received considerable attention recently as effective stimuli-transforming vectors for a range of drug delivery applications. Considering this fact, the present work involves systematic formulation development, optimization, functional evaluation and ex vivo performance of thermosensitive soluble gels containing dexamethasone 21-phosphate disodium salt (DXN) as the model therapeutic. A series of in situ gel-forming systems comprising the thermoreversible polymer poloxamer-407 (P407), along with hydroxypropyl methyl cellulose (HPMC) and chitosan were first formulated. The optimized soluble gels were evaluated for their potential to promote greater retention at the mucosal surface, for improved therapeutic efficacy, compared to existing solution/suspension-based steroid formulations used clinically. Optimized soluble gels demonstrated a desirable gelation temperature with Newtonian fluid behaviour observed under storage conditions (4-8°C), and pseudoplastic fluid behaviour recorded at nasal cavity/sinus temperature (≈34°C). In vitro characterization of the formulations, including rheological evaluation, textural analysis and mucoadhesion studies of the gel form, was carried out. Considerable improvement in mechanical properties and mucoadhesion was observed with incorporation of HPMC and chitosan into the gelling systems. The lead poloxamer-based soluble gels, PGHC4 and PGHC7, which were carried through to ex vivo permeation studies displayed extended drug release profiles in conditions mimicking the human nasal cavity, which indicates their suitability for treating a range of conditions affecting the nasal cavity/sinuses. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Design optimization of a smooth headlamp reflector to SAE/DOT beam-shape requirements

    NASA Astrophysics Data System (ADS)

    Shatz, Narkis E.; Bortz, John C.; Dassanayake, Mahendra S.

    1999-10-01

    The optical design of Ford Motor Company's 1992 Mercury Grand Marquis headlamp utilized a Sylvania 9007 filament source, a paraboloidal reflector and an array of cylindrical lenses (flutes). It has been of interest to Ford to determine the practicality of closely reproducing the on-road beam pattern performance of this headlamp with an alternate optical arrangement in which control of the beam is achieved solely by the geometry of the reflector surface, subject to a requirement of smooth-surface continuity, with the outer lens replaced by a clear plastic cover having no beam-forming function. To this end the far-field intensity distribution produced by the 9007 bulb was measured at the low-beam setting. These measurements were then used to develop a light-source model for use in ray tracing simulations of candidate reflector geometries. An objective function was developed to compare candidate beam patterns with the desired beam pattern. Functional forms for the 3D reflector geometry were developed with free parameters to be subsequently optimized. A solution was sought meeting the detailed US SAE/DOT constraints for minimum and maximum permissible levels of illumination in the different portions of the beam pattern. Simulated road scenes were generated by Ford Motor Company to compare the illumination properties of the new design with those of the original Grand Marquis headlamp.

  17. Optimization of an idealized Y-Shaped Extracardiac Fontan Baffle

    NASA Astrophysics Data System (ADS)

    Yang, Weiguang; Feinstein, Jeffrey; Mohan Reddy, V.; Marsden, Alison

    2008-11-01

    Research has shown that vascular geometries can significantly impact hemodynamic performance, particularly in pediatric cardiology, where anatomy varies from one patient to another. In this study we optimize a newly proposed design for the Fontan procedure, a surgery used to treat single ventricle heart patients. The current Fontan procedure connects the inferior vena cava (IVC) to the pulmonary arteries (PA's) via a straight Gore-Tex tube, forming a T-shaped junction. In the Y-graft design, the IVC is connected to the left and right PAs by two branches. Initial studies on the Y-graft design showed an increase in efficiency and improvement in flow distribution compared to traditional designs in a single patient-specific model. We now optimize an idealized Y-graft model to refine the design prior to patient testing. A derivative-free optimization algorithm using Kriging surrogate functions and mesh adaptive direct search is coupled to a 3-D finite element Navier-Stokes solver. We will present optimization results for rest and exercise conditions and examine the influence of energy efficiency, wall shear stress, pulsatile flow, and flow distribution on the optimal design.

  18. A computational NMR study on zigzag aluminum nitride nanotubes

    NASA Astrophysics Data System (ADS)

    Bodaghi, Ali; Mirzaei, Mahmoud; Seif, Ahmad; Giahi, Masoud

    2008-12-01

    A computational nuclear magnetic resonance (NMR) study is performed to investigate the electronic structure properties of single-walled zigzag aluminum nitride nanotubes (AlNNTs). The chemical-shielding (CS) tensors are calculated at the sites of Al-27 and N-15 nuclei in three structural forms of AlNNT: H-saturated, Al-terminated, and N-terminated. The structural forms are first optimized, and the calculated CS tensors in the optimized structures are then converted to chemical-shielding isotropic (CSI) and chemical-shielding anisotropic (CSA) parameters. The calculated parameters reveal that the various Al-27 and N-15 nuclei are divided into layers with equivalent electrostatic properties; furthermore, Al and N can act as Lewis base and acid, respectively. In the Al-terminated and N-terminated forms of AlNNT, in which one mouth of the nanotube is terminated by aluminum and nitrogen nuclei, respectively, only the CS tensors of the nuclei nearest to the mouth of the nanotube are significantly changed due to removal of the saturating hydrogen atoms. Density functional theory (DFT) calculations are performed using the GAUSSIAN 98 program package.

  19. Inconsistent Investment and Consumption Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk

    In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.

  20. Towards programmable plant genetic circuits.

    PubMed

    Medford, June I; Prasad, Ashok

    2016-07-01

    Synthetic biology enables the construction of genetic circuits with predictable gene functions in plants. Detailed quantitative descriptions of the transfer function or input-output function for genetic parts (promoters, 5' and 3' untranslated regions, etc.) are collected. These data are then used in computational simulations to determine their robustness and desired properties, thereby enabling the best components to be selected for experimental testing in plants. In addition, the process forms an iterative workflow which allows vast improvement to validated elements with sub-optimal function. These processes enable computational functions such as digital logic in living plants and follow the pathway of technological advances which took us from vacuum tubes to cell phones. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  1. Power-Aware Intrusion Detection in Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Şen, Sevil; Clark, John A.; Tapiador, Juan E.

    Mobile ad hoc networks (MANETs) are a highly promising new form of networking. However they are more vulnerable to attacks than wired networks. In addition, conventional intrusion detection systems (IDS) are ineffective and inefficient for highly dynamic and resource-constrained environments. Achieving an effective operational MANET requires tradeoffs to be made between functional and non-functional criteria. In this paper we show how Genetic Programming (GP) together with a Multi-Objective Evolutionary Algorithm (MOEA) can be used to synthesise intrusion detection programs that make optimal tradeoffs between security criteria and the power they consume.

  2. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems. Examples include discrete time, discrete strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, (l1, l2, k), just as the Box-Jenkins transfer function models. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
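    A minimal sketch of an (l1, l2, k) model of this kind: states are (input-history, output-history) pairs and output probabilities are estimated by counting over an observed symbol stream. The binary stream and parameter values below are made up for illustration and are not the paper's algorithm in full.

      from collections import defaultdict
      import random

      def fit_mealy(inputs, outputs, l1=2, l2=1, k=1):
          """Estimate P(output_t | last l1 inputs and last l2 outputs, lagged by k)."""
          counts = defaultdict(lambda: defaultdict(int))
          for t in range(k + max(l1, l2), len(outputs)):
              state = (tuple(inputs[t - k - l1 + 1 : t - k + 1]),
                       tuple(outputs[t - k - l2 + 1 : t - k + 1]))
              counts[state][outputs[t]] += 1
          return {s: {o: c / sum(d.values()) for o, c in d.items()}
                  for s, d in counts.items()}

      # Hypothetical binary I/O stream: the "plant" output is the XOR of the two
      # inputs inside the lagged history window, so the fitted machine should be
      # essentially deterministic for every observed state.
      random.seed(0)
      ins = [random.randint(0, 1) for _ in range(500)]
      outs = [0, 0] + [ins[t - 1] ^ ins[t - 2] for t in range(2, 500)]

      model = fit_mealy(ins, outs, l1=2, l2=1, k=1)
      state = ((1, 0), (0,))       # input history (1, 0), previous output 0
      print("P(output | state) =", model.get(state))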

  3. Some Insights of Spectral Optimization in Ocean Color Inversion

    NASA Technical Reports Server (NTRS)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In the past decades various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of the approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. The applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulations and field measurements, we show the shape of the error surface, in order to assess the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, impacts of such differences on the error surface as well as on the retrievals are also presented.
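    A stripped-down version of the spectral-optimization idea is sketched below: a forward reflectance model with two free variables (phytoplankton absorption and particle backscattering magnitudes) is fit to an observed spectrum by minimizing a least-squares error function. The forward model and spectral shapes are deliberately simplistic placeholders rather than an operational semi-analytical model.

      import numpy as np
      from scipy.optimize import minimize

      wl = np.linspace(400, 700, 31)                                 # wavelengths (nm)
      aph_shape = np.exp(-0.5 * ((wl - 440.0) / 40.0) ** 2)          # toy pigment shape
      bbp_shape = (550.0 / wl) ** 1.0                                # toy particle shape
      aw = 0.005 + 0.02 * ((wl - 400.0) / 300.0) ** 2                # toy water absorption

      def rrs_model(aph440, bbp550):
          """Very simplified reflectance ~ bb / (a + bb)."""
          a = aw + aph440 * aph_shape
          bb = 0.0005 + bbp550 * bbp_shape
          return 0.09 * bb / (a + bb)

      rrs_obs = rrs_model(0.05, 0.004) + np.random.default_rng(3).normal(0, 1e-4, wl.size)

      def error(x):                                                  # spectral error target
          return np.sum((rrs_model(*x) - rrs_obs) ** 2)

      res = minimize(error, x0=[0.01, 0.001], bounds=[(1e-4, 1.0), (1e-5, 0.1)])
      print("retrieved aph(440), bbp(550):", np.round(res.x, 4))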

  4. Study of the Bellman equation in a production model with unstable demand

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2014-09-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by the need to analyze well-known problems in the functioning of low-competitiveness macroeconomic structures. The original formulation is an infinite-horizon optimal control problem. As a result, the model is formalized in the form of a Bellman equation. It is proved that the corresponding Bellman operator is a contraction and has a unique fixed point in the chosen class of functions. A closed-form solution of the Bellman equation is found using the method of steps. The influence of the credit interest rate on the assessment of the firm's market value is analyzed by applying the developed model.
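    The contraction property can be illustrated on a tiny discounted dynamic program: iterating the Bellman operator from any initial guess converges geometrically to a unique fixed point. The state grid, payoff, and transition rule below are arbitrary stand-ins, not the production model of the paper.

      import numpy as np

      # Toy controlled problem: state s = working-capital level, action a = sales volume.
      states = np.linspace(0.1, 1.0, 20)
      actions = np.linspace(0.0, 0.5, 11)
      beta = 0.9                                        # discount factor (contraction modulus)

      def reward(s, a):
          return min(a, s) - 0.1 * a ** 2               # illustrative profit function

      def next_state(s, a):
          return min(max(s - 0.5 * a + 0.2, states[0]), states[-1])

      def bellman(V):
          """One application of the Bellman operator on the state grid."""
          return np.array([max(reward(s, a) + beta * np.interp(next_state(s, a), states, V)
                               for a in actions) for s in states])

      V = np.zeros_like(states)
      for it in range(500):
          Vn = bellman(V)
          done = np.max(np.abs(Vn - V)) < 1e-8          # geometric convergence to fixed point
          V = Vn
          if done:
              break
      print("fixed point reached after", it + 1, "iterations; V(s_max) =", round(V[-1], 4))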

  5. Product modular design incorporating preventive maintenance issues

    NASA Astrophysics Data System (ADS)

    Gao, Yicong; Feng, Yixiong; Tan, Jianrong

    2016-03-01

    Traditional modular design methods lead to product maintenance problems, because the module form of a system is created according to either the function requirements or the manufacturing considerations. To solve these problems, a new modular design method is proposed that considers not only the traditional function-related attributes but also maintenance-related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then the reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model is used to evaluate the differences among the modules of the product so that the optimal module of the product can be established. After that, a multi-objective optimization problem based on metrics for the preventive maintenance interval difference degree and preventive maintenance economics is formulated for modular optimization. A multi-objective GA is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to depict the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed. These experiments investigate the impacts of these parameters on the optimal modularity strategies and the structure of modules. The research proposes a new modular design method, which may help to improve the maintainability of products in modular design.

  6. First and second order approximations to stage numbers in multicomponent enrichment cascades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scopatz, A.

    2013-07-01

    This paper describes closed-form Taylor series approximations to the number of product stages in a multicomponent enrichment cascade. Such closed-form approximations are required when a symbolic, rather than a numeric, algorithm is used to compute the optimal cascade state. Both first and second order approximations were implemented. The first order solution was found to be grossly incorrect, having the wrong functional form over the entire domain. On the other hand, the second order solution shows excellent agreement with the 'true' solution over the domain of interest. An implementation of the symbolic, second order solver is available in the free and open source PyNE library. (authors)

  7. Phase demodulation method from a single fringe pattern based on correlation with a polynomial form.

    PubMed

    Robin, Eric; Valle, Valéry; Brémand, Fabrice

    2005-12-01

    The method presented extracts the demodulated phase from only one fringe pattern. Locally, this method approximates the fringe pattern morphology with the help of a mathematical model. The degree of similarity between the mathematical model and the real fringe is estimated by minimizing a correlation function. To use an optimization process, we have chosen a polynomial form as the mathematical model. However, the use of a polynomial form requires an identification procedure in order to retrieve the demodulated phase. This method, polynomial modulated phase correlation, is tested on several examples. Its performance, in terms of speed and precision, is presented on very noisy fringe patterns.

  8. Theoretical study on the vibrational spectra of methoxy- and formyl-dihydroxy- trans-stilbenes and their hydrolytic equilibria

    NASA Astrophysics Data System (ADS)

    Molnár, Viktor; Billes, Ferenc; Tyihák, Ernő; Mikosch, Hans

    2008-02-01

    Compounds formed by exchanging one of the resveratrol hydroxy groups for a methoxy or formyl group are biologically important. Quantum chemical DFT calculations were applied for the simulation of some of their properties. Their optimized structures and charge distributions were computed. Based on the calculated vibrational force constants and optimized molecular structures, infrared and Raman spectra were calculated. The characteristics of the vibrational modes were determined by normal coordinate analysis. Applying the calculated thermodynamic functions also to resveratrol, methanol, formaldehyde and water, the thermodynamic equilibria between resveratrol and its methyl- and formyl-substituted derivatives, respectively, were calculated.

  9. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
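    The optimal-estimator decomposition can be reproduced on synthetic data: the irreducible error is what remains after conditioning the target on the input parameters, estimated here with a binned (histogram) conditional mean for a single parameter, the case in which histograms still behave well. The target function, noise level, and imperfect model below are invented for the demonstration.

      import numpy as np

      rng = np.random.default_rng(4)
      phi = rng.uniform(0.0, 1.0, 200_000)                         # model input parameter
      q = np.sin(2 * np.pi * phi) + rng.normal(0, 0.2, phi.size)   # quantity to be modelled

      # Optimal estimator: E[q | phi], approximated with a histogram (binned mean).
      bins = np.linspace(0.0, 1.0, 65)
      idx = np.clip(np.digitize(phi, bins) - 1, 0, len(bins) - 2)
      counts = np.bincount(idx, minlength=len(bins) - 1)
      cond_mean = np.bincount(idx, weights=q, minlength=len(bins) - 1) / counts

      irreducible = np.mean((q - cond_mean[idx]) ** 2)             # floor no model of phi can beat
      model = 0.9 * np.sin(2 * np.pi * phi)                        # some imperfect functional form
      total = np.mean((q - model) ** 2)
      print(f"irreducible error ~ {irreducible:.4f}, total error {total:.4f}, "
            f"functional error ~ {total - irreducible:.4f}")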

  10. Mechanistic analysis of Zein nanoparticles/PLGA triblock in situ forming implants for glimepiride.

    PubMed

    Ahmed, Osama Abdelhakim Aly; Zidan, Ahmed Samir; Khayat, Maan

    2016-01-01

    The study aims at applying pharmaceutical nanotechnology and D-optimal fractional factorial design to screen and optimize the high-risk variables affecting the performance of a complex drug delivery system consisting of glimepiride-Zein nanoparticles and inclusion of the optimized formula with thermoresponsive triblock copolymers in in situ gel. Sixteen nanoparticle formulations were prepared by liquid-liquid phase separation method according to the D-optimal fractional factorial design encompassing five variables at two levels. The responses investigated were glimepiride entrapment capacity (EC), particle size and size distribution, zeta potential, and in vitro drug release from the prepared nanoparticles. Furthermore, the feasibility of embedding the optimized Zein-based glimepiride nanoparticles within thermoresponsive triblock copolymers poly(lactide-co-glycolide)-block-poly(ethylene glycol)-block-poly(lactide-co-glycolide) in in situ gel was evaluated for controlling glimepiride release rate. Through the systematic optimization phase, improvement of glimepiride EC of 33.6%, nanoparticle size of 120.9 nm with a skewness value of 0.2, zeta potential of 11.1 mV, and sustained release features of 3.3% and 17.3% drug released after 2 and 24 hours, respectively, were obtained. These desirability functions were obtained at Zein and glimepiride loadings of 50 and 75 mg, respectively, utilizing didodecyldimethylammonium bromide as a stabilizer at 0.1% and 90% ethanol as a common solvent. Moreover, incorporating this optimized formulation in triblock copolymers-based in situ gel demonstrated pseudoplastic behavior with reduction of drug release rate as the concentration of polymer increased. This approach to control the release of glimepiride using Zein nanoparticles/triblock copolymers-based in situ gel forming intramuscular implants could be useful for improving diabetes treatment effectiveness.

  11. Up-cycling waste glass to minimal water adsorption/absorption lightweight aggregate by rapid low temperature sintering: optimization by dual process-mixture response surface methodology.

    PubMed

    Velis, Costas A; Franco-Salinas, Claudia; O'Sullivan, Catherine; Najorka, Jens; Boccaccini, Aldo R; Cheeseman, Christopher R

    2014-07-01

    Mixed color waste glass extracted from municipal solid waste is either not recycled, in which case it is an environmental and financial liability, or it is used in relatively low value applications such as normal weight aggregate. Here, we report on converting it into a novel glass-ceramic lightweight aggregate (LWA), potentially suitable for high added value applications in structural concrete (upcycling). The artificial LWA particles were formed by rapidly sintering (<10 min) waste glass powder with clay mixes using sodium silicate as binder and borate salt as flux. Composition and processing were optimized using response surface methodology (RSM) modeling, and specifically (i) a combined process-mixture dual RSM, and (ii) multiobjective optimization functions. The optimization considered raw materials and energy costs. Mineralogical and physical transformations occur during sintering and a cellular vesicular glass-ceramic composite microstructure is formed, with strong correlations existing between bloating/shrinkage during sintering, density and water adsorption/absorption. The diametrical expansion could be effectively modeled via the RSM and controlled to meet a wide range of specifications; here we optimized for LWA structural concrete. The optimally designed LWA is sintered in comparatively low temperatures (825-835 °C), thus potentially saving costs and lowering emissions; it had exceptionally low water adsorption/absorption (6.1-7.2% w/wd; optimization target: 1.5-7.5% w/wd); while remaining substantially lightweight (density: 1.24-1.28 g.cm(-3); target: 0.9-1.3 g.cm(-3)). This is a considerable advancement for designing effective environmentally friendly lightweight concrete constructions, and boosting resource efficiency of waste glass flows.

  12. Optimization of Layer Densities for Spacecraft Multilayered Insulation Systems

    NASA Technical Reports Server (NTRS)

    Johnson, W. L.

    2009-01-01

    Numerous tests of various multilayer insulation systems have indicated that there are optimal densities for these systems. However, the only method of calculating this optimal density was a complex physics-based algorithm developed by McIntosh. In the 1970's much data were collected on the performance of these insulation systems, with many different variables analyzed. All formulas generated included the number of layers and the layer density as geometric variables in solving for the heat flux, but none of them was in a differentiable form with respect to a single geometric variable. It was recently discovered that by converting the equations from heat flux to thermal conductivity using Fourier's Law, the equations became functions of layer density, temperatures, and material properties only. The thickness and number of layers of the blanket were merged into a layer density. These equations were then differentiated with respect to layer density. By setting the first derivative equal to zero and solving for the layer density, the critical layer density was determined. Taking the second derivative showed that the critical layer density is a minimum of the function and thus the optimum density for minimal heat leak; this is confirmed by plotting the original function. This method was checked and validated using test data from the Multipurpose Hydrogen Testbed, which was designed using McIntosh's algorithm.
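    The procedure reduces to elementary calculus once a conductivity form is assumed. The sketch below uses a generic two-term expression (a solid-conduction term that grows with layer density plus a radiation term that decays with it); the exponents and coefficients are placeholders, not the empirical correlations of the report.

      from scipy.optimize import minimize_scalar

      a, b = 1.0e-6, 4.0e-3                # illustrative coefficients only

      def k(n):
          """Assumed effective conductivity: conduction rises with layer density n
          (layers/cm), while the radiation-like term falls with it."""
          return a * n ** 2.56 + b / n

      # Setting dk/dn = 2.56*a*n**1.56 - b/n**2 = 0 gives the critical layer density.
      n_opt = (b / (2.56 * a)) ** (1.0 / 3.56)

      # Second derivative > 0 confirms a minimum; a bounded numerical search agrees.
      d2k = 2.56 * 1.56 * a * n_opt ** 0.56 + 2.0 * b / n_opt ** 3
      assert d2k > 0
      assert abs(minimize_scalar(k, bounds=(1.0, 100.0), method='bounded').x - n_opt) < 1e-2

      print(f"optimal layer density ~ {n_opt:.1f} layers/cm, k_min ~ {k(n_opt):.2e}")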

  13. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of the optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator allows one to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows one to perform LESs very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.

  14. Stability and Bifurcation of a Fishery Model with Crowley-Martin Functional Response

    NASA Astrophysics Data System (ADS)

    Maiti, Atasi Patra; Dubey, B.

    To understand the dynamics of a fishery system, a nonlinear mathematical model is proposed and analyzed. In an aquatic environment, we consider two populations: one is prey and the other is predator. Both fish populations grow logistically, and the interaction between them is of Crowley-Martin type functional response. It is assumed that both populations are harvested, the harvesting effort is a dynamical variable, and a tax is considered as the control variable. The existence of equilibrium points and their local stability are examined. The existence, stability and direction of Hopf-bifurcation are also analyzed with the help of the Center Manifold theorem and normal form theory. The global stability behavior of the positive equilibrium point is also discussed. In order to find the value of the optimal tax, the optimal harvesting policy is used. To verify our analytical findings, an extensive numerical simulation is carried out for this model system.

  15. Current fluctuations in quantum absorption refrigerators

    NASA Astrophysics Data System (ADS)

    Segal, Dvira

    2018-05-01

    Absorption refrigerators transfer thermal energy from a cold bath to a hot bath without input power by utilizing heat from an additional "work" reservoir. Particularly interesting is a three-level design for a quantum absorption refrigerator, which can be optimized to reach the maximal (Carnot) cooling efficiency. Previous studies of three-level chillers focused on the behavior of the averaged cooling current. Here, we go beyond that and study the full counting statistics of heat exchange in a three-level chiller model. We explain how to obtain the complete cumulant generating function of the refrigerator in a steady state, then derive a partial cumulant generating function, which yields closed-form expressions for both the averaged cooling current and its noise. Our analytical results and simulations are beneficial for the design of nanoscale engines and cooling systems far from equilibrium, with their performance optimized according to different criteria, efficiency, power, fluctuations, and dissipation.

  16. On Reverse Stackelberg Game and Optimal Mean Field Control for a Large Population of Thermostatically Controlled Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This paper studies a multi-stage pricing problem for a large population of thermostatically controlled loads. The problem is formulated as a reverse Stackelberg game that involves a mean field game in the hierarchy of decision making. In particular, in the higher level, a coordinator needs to design a pricing function to motivate individual agents to maximize the social welfare. In the lower level, the individual utility maximization problem of each agent forms a mean field game coupled through the pricing function that depends on the average of the population control/state. We derive the solution to the reverse Stackelberg game by connecting it to a team problem and the competitive equilibrium, and we show that this solution corresponds to the optimal mean field control that maximizes the social welfare. Realistic simulations are presented to validate the proposed methods.

  17. Heterologous expression of Trametes versicolor laccase in Saccharomyces cerevisiae.

    PubMed

    Iimura, Yosuke; Sonoki, Tomonori; Habe, Hiroshi

    2018-01-01

    Laccase is used in various industrial fields, and it has been the subject of numerous studies. Trametes versicolor laccase has one of the highest redox potentials among the various forms of this enzyme. In this study, we optimized the expression of laccase in Saccharomyces cerevisiae. Optimizing the culture conditions resulted in an improvement in the expression level, and approximately 45 U/L of laccase was functionally secreted in the culture. The recombinant laccase was found to be a heavily hypermannosylated glycoprotein, and the molecular weight of the carbohydrate chain was approximately 60 kDa. The hypermannosylated glycans lowered the substrate affinity, but the optimum pH and thermostability were not changed. The functional expression system described here will aid molecular evolutionary studies conducted to generate new variants of laccase. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A Control Model: Interpretation of Fitts' Law

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1984-01-01

    The analytical results for several models are given: a first order model where it is assumed that the hand velocity can be directly controlled, and a second order model where it is assumed that the hand acceleration can be directly controlled. Two different types of control laws are investigated. One is a linear function of the hand error and error rate; the other is the time-optimal control law. Results show that the first and second order models with the linear control law produce a movement time (MT) function with the exact form of Fitts' Law. The control-law interpretation implies that the effect of target width on MT must be a result of the vertical motion which elevates the hand from the starting point and drops it on the target at the target edge. The time-optimal control law did not produce a movement-time formula similar to Fitts' Law.
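    For the first-order model with a proportional control law, the movement-time prediction follows directly: with the hand position driven by dx/dt = k (D - x) and the movement ending once the remaining error falls below half the target width W, MT = ln(2D/W)/k, which is linear in the Fitts index of difficulty. The gain and target geometry below are arbitrary values chosen only to illustrate that relationship.

      import numpy as np

      k = 8.0                                       # illustrative control gain (1/s)

      def movement_time(D, W):
          """First-order model dx/dt = k (D - x): stop when |D - x| <= W / 2."""
          return np.log(2.0 * D / W) / k

      # Movement time versus Fitts index of difficulty ID = log2(2D/W).
      for D, W in [(0.10, 0.02), (0.20, 0.02), (0.40, 0.01), (0.30, 0.005)]:
          ID = np.log2(2.0 * D / W)
          print(f"D={D:.2f} m, W={W:.3f} m, ID={ID:4.1f} bits, "
                f"MT={movement_time(D, W) * 1e3:6.1f} ms")

      # MT = (ln 2 / k) * ID, i.e. exactly the Fitts' Law form with zero intercept.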

  19. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations including the higher-order reliability methods (HORM) for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter I discusses the fundamental definitions of the probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definition of safety index and most probable point of failure are introduced. Efficient ways of computing the safety index with a fewer number of iterations is emphasized. In chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
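    A compact illustration of the FORM step described above: in standard normal space the safety index β is the distance from the origin to the closest point on the limit state g(u) = 0 (the most probable failure point), and the failure probability is approximated by Φ(-β). The limit state function used here is an arbitrary example, not one from the report.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Example limit state in standard normal space (illustrative only): failure when g <= 0.
      def g(u):
          return 3.0 - u[0] - 0.5 * u[1] ** 2

      # FORM: beta = min ||u|| subject to g(u) = 0; the minimizer is the MPP.
      res = minimize(lambda u: np.dot(u, u), x0=np.array([1.0, 1.0]),
                     constraints=[{'type': 'eq', 'fun': g}], method='SLSQP')
      beta = float(np.sqrt(res.fun))
      pf_form = norm.cdf(-beta)

      # Crude Monte Carlo check of the failure probability.
      u = np.random.default_rng(5).standard_normal((200_000, 2))
      pf_mc = np.mean(g(u.T) <= 0)
      print(f"beta = {beta:.3f}, FORM Pf = {pf_form:.4f}, Monte Carlo Pf = {pf_mc:.4f}")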

  20. Pivot method for global optimization: A study of structures and phase changes in water clusters

    NASA Astrophysics Data System (ADS)

    Nigra, Pablo Fernando

    In this thesis, we have carried out a study of water clusters. The research work has been developed in two stages. In the first stage, we have investigated the properties of water clusters at zero temperature by means of global optimization. The clusters were modeled by using two well known pairwise potentials having distinct characteristics. One is the Matsuoka-Clementi-Yoshimine potential (MCY), an ab initio fitted function based on a rigid-molecule model; the other is the Stillinger-Rahman potential (SR), an empirical function based on a flexible-molecule model. The algorithm used for the global optimization of the clusters was the pivot method, which was developed in our group. The results have shown that, under certain conditions, the pivot method may yield optimized structures which are related to one another in such a way that they seem to form structural families. The structures in a family can be thought of as formed from the aggregation of single units. The particular types of structures we have found are quasi-one-dimensional tubes built from stacking cyclic units such as tetramers, pentamers, and hexamers. The binding energies of these tubes form sequences that span smooth curves with clear asymptotic behavior; therefore, we have also studied the sequences applying the Bulirsch-Stoer (BST) algorithm to accelerate convergence. In the second stage of the research work, we have studied the thermodynamic properties of a typical water cluster at finite temperatures. The selected cluster was the water octamer, which exhibits a definite solid-liquid phase change. The water octamer also has several low-lying energy cubic structures with large energetic barriers that cause ergodicity breaking in regular Monte Carlo simulations. For that reason we have simulated the octamer using parallel tempering Monte Carlo combined with the multihistogram method. This has permitted us to calculate the heat capacity from very low temperatures up to T = 230 K. We have found the melting temperature to be 178.5 K. In addition, we have been able to estimate at 12 K the onset temperature of a solid-solid phase change between the two lowest-energy-lying isomers.

  1. Behavior of Halogen Bonds of the Y-X⋅⋅⋅π Type (X, Y=F, Cl, Br, I) in the Benzene π System, Elucidated by Using a Quantum Theory of Atoms in Molecules Dual-Functional Analysis.

    PubMed

    Sugibayashi, Yuji; Hayashi, Satoko; Nakanishi, Waro

    2016-08-18

    The nature of halogen bonds of the Y-X-✶-π(C6H6) type (X, Y = F, Cl, Br, and I) has been elucidated by using the quantum theory of atoms in molecules (QTAIM) dual-functional analysis (QTAIM-DFA), which we proposed recently. Asterisks (✶) emphasize the presence of bond-critical points (BCPs) in the interactions in question. Total electron energy densities, Hb(rc), are plotted versus Hb(rc) - Vb(rc)/2 [= (ħ²/8m)∇²ρb(rc)] for the interactions in QTAIM-DFA, in which Vb(rc) are the potential energy densities at the BCPs. Data for perturbed structures around the fully optimized structures were used for the plots, in addition to those of the fully optimized ones. The plots were analyzed by using the polar (R, θ) coordinate for the data of fully optimized structures, and (θp, κp) for the data that contained the perturbed structures; θp corresponds to the tangent line of the plot and κp is the curvature. Whereas (R, θ) corresponds to the static nature, (θp, κp) represents the dynamic nature of the interactions. All interactions in Y-X-✶-π(C6H6) are classified as pure closed-shell interactions and characterized as having vdW nature, except for Y-I-✶-π(C6H6) (Y = F, Cl, Br) and F-Br-✶-π(C6H6), which have typical hydrogen-bond nature without covalency. I-I-✶-π(C6H6) has a borderline nature between the two. Y-F-✶-π(C6H6) (Y = Br, I) were optimized as bent forms, in which Y-✶-π interactions were detected. The Y-✶-π interactions in the bent forms are predicted to be substantially weaker than those in the linear F-Y-✶-π(C6H6) forms. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  3. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  4. Diamond-like phases formed from fullerene-like clusters

    NASA Astrophysics Data System (ADS)

    Belenkov, E. A.; Greshnyakov, V. A.

    2015-11-01

    The geometrically optimized structure and properties of thirteen diamond-like carbon phases formed by linking or combining fullerene-like clusters (C4, C6, C8, C12, C16, C24, or C48) have been investigated. Atoms in the structures of these phases are located in crystallographically equivalent positions. The calculations have been performed using the density functional theory in the generalized gradient approximation. The calculated values of the structural characteristics and properties (sublimation energies, bulk moduli, band gaps, X-ray diffraction patterns) of the studied diamond-like phases differ significantly from the corresponding values for cubic diamond.

  5. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
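
    As an illustration of the kind of finite-dimensional Koopman-invariant subspace discussed above, the following sketch (a standard textbook example written by us, not the authors' code) verifies numerically that augmenting the state (x1, x2) with the extra observable x1**2 yields an exactly linear evolution for a simple quadratic system:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Nonlinear system: dx1/dt = mu*x1, dx2/dt = lam*(x2 - x1**2).
        # The observables y = (x1, x2, x1**2) span a Koopman-invariant subspace,
        # since dy/dt = K @ y with the constant matrix K below.
        mu, lam = -0.05, -1.0
        K = np.array([[mu,  0.0,  0.0],
                      [0.0, lam, -lam],
                      [0.0, 0.0,  2.0 * mu]])

        def f(t, x):
            return [mu * x[0], lam * (x[1] - x[0] ** 2)]

        x0 = np.array([1.0, 0.5])
        t = np.linspace(0.0, 10.0, 200)
        nonlin = solve_ivp(f, (t[0], t[-1]), x0, t_eval=t, rtol=1e-9, atol=1e-12)

        def g(t, y):                      # linear evolution of the observables
            return K @ y

        y0 = np.array([x0[0], x0[1], x0[0] ** 2])
        lin = solve_ivp(g, (t[0], t[-1]), y0, t_eval=t, rtol=1e-9, atol=1e-12)

        # The first two observables reproduce the nonlinear state trajectory.
        print(np.max(np.abs(lin.y[:2] - nonlin.y)))   # should be ~1e-6 or smaller

    Systems with several fixed points do not admit such a finite linear closure that contains the state, which is exactly the obstruction described in the abstract.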

  6. Self-optimizing charge-transfer energy phenomena in metallosupramolecular complexes by dynamic constitutional self-sorting.

    PubMed

    Legrand, Yves-Marie; van der Lee, Arie; Barboiu, Mihail

    2007-11-12

    In this paper we report an extended series of 2,6-(iminoarene)pyridine-type ZnII complexes [(Lii)2Zn]II, which were surveyed for their ability to self-exchange both their ligands and their aromatic arms and to form different homoduplex and heteroduplex complexes in solution. The self-sorting of heteroduplex complexes is likely the result of geometric constraints. Whereas the imine-exchange process occurs quantitatively in 1:1 mixtures of [(Lii)2Zn]II complexes, the octahedral coordination process around the metal ion defines spatially frustrated exchanges that involve the selective formation of heterocomplexes bearing two-by-two different substituents; the bulkiest ones (pyrene in principle) specifically interact with the pseudoterpyridine core, sterically hindering the least bulky ones, which are intermolecularly stacked with similar ligands of neighboring molecules. Such a self-sorting process, defined by the specific self-constitution of the ligands exchanging their aromatic substituents, is self-optimized by specific control over their spatial orientation around the metal center within the complex. The complexes ultimately show an improved charge-transfer energy function by virtue of the dynamic amplification of self-optimized heteroduplex architectures. These systems therefore illustrate the convergence of the combinatorial self-sorting strategy of dynamic combinatorial libraries (DCLs) and constitutionally self-optimized function.

  7. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy

    PubMed Central

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    The lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf-position dependent and largely attributable to the leaf end shape. In our study, an analytical method for modelling the leaf-end-induced lateral penumbra is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and a ray-tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a bi-objective function of penumbra mean and variance introduced, a genetic algorithm is used to approximate the Pareto frontier. Results show that for the circular-arc leaf end the objective function is convex, and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves the minimal standard deviation, while the B-spline yields the minimum penumbra mean. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design for multileaf collimators. PMID:27110274
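
    The Pareto-frontier step can be illustrated independently of the penumbra model. The sketch below (our own generic code, with random stand-in scores rather than actual penumbra calculations) filters a set of candidate designs down to the non-dominated set for two minimized objectives such as penumbra mean and variance:

        import numpy as np

        def pareto_front(points):
            """Return the non-dominated subset of an (n, 2) array of objective
            values (both objectives minimized), e.g. penumbra mean and variance."""
            pts = np.asarray(points, dtype=float)
            keep = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
                if not dominated:
                    keep.append(i)
            return pts[keep]

        # Toy candidate leaf-end designs scored by (penumbra mean, penumbra variance).
        rng = np.random.default_rng(0)
        scores = rng.uniform(size=(50, 2))
        print(pareto_front(scores))

    A genetic algorithm, as used in the paper, would repeatedly generate new candidate shapes and apply this kind of non-dominated filtering to approximate the frontier.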

  8. Optimization in Quaternion Dynamic Systems: Gradient, Hessian, and Learning Algorithms.

    PubMed

    Xu, Dongpo; Xia, Yili; Mandic, Danilo P

    2016-02-01

    The optimization of real scalar functions of quaternion variables, such as the mean square error or array output power, underpins many practical applications. Solutions typically require the calculation of the gradient and Hessian. However, real functions of quaternion variables are essentially nonanalytic, which is prohibitive to the development of quaternion-valued learning systems. To address this issue, we propose new definitions of the quaternion gradient and Hessian, based on the novel generalized Hamilton-real (GHR) calculus, thus making possible an efficient derivation of general optimization algorithms directly in the quaternion field, rather than using the isomorphism with the real domain, as is current practice. In addition, unlike existing quaternion gradients, the GHR calculus allows for the product and chain rules, and for a one-to-one correspondence of the novel quaternion gradient and Hessian with their real counterparts. Properties of the quaternion gradient and Hessian relevant to numerical applications are also introduced, opening a new avenue of research in quaternion optimization and greatly simplifying the derivation of learning algorithms. The proposed GHR calculus is shown to yield the same generic algorithm forms as the corresponding real- and complex-valued algorithms. Advantages of the proposed framework are demonstrated through illustrative simulations in quaternion signal processing and neural networks.
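
    For orientation only, one widely used convention from the earlier HR calculus (which the GHR calculus of the paper generalizes, restoring the product and chain rules) writes q = q_a + i q_b + j q_c + k q_d and defines a conjugate derivative of a real scalar cost f, along which a steepest-descent update is typically taken:

        \frac{\partial f}{\partial q^{*}} = \frac{1}{4}\left( \frac{\partial f}{\partial q_a} + i\,\frac{\partial f}{\partial q_b} + j\,\frac{\partial f}{\partial q_c} + k\,\frac{\partial f}{\partial q_d} \right),
        \qquad
        q_{k+1} = q_k - \mu\, \frac{\partial f}{\partial q^{*}} .

    This is a generic HR-style definition, not a reproduction of the GHR formulas in the paper.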

  9. Computational screening of functional groups for capture of toxic industrial chemicals in porous materials.

    PubMed

    Kim, Ki Chul; Fairen-Jimenez, David; Snurr, Randall Q

    2017-12-06

    A thermodynamic analysis using quantum chemical methods was carried out to identify optimal functional group candidates that can be included in metal-organic frameworks and activated carbons for the selective capture of toxic industrial chemicals (TICs) in humid air. We calculated the binding energies of 14 critical TICs plus water with a series of 10 functional groups attached to a naphthalene ring model. Using vibrational calculations, the free energies of adsorption were calculated in addition to the binding energies. Our results show that, in these systems, the binding energies and free energies follow similar trends. We identified copper(I) carboxylate as the optimal functional group (among those studied) for the selective binding of the majority of the TICs in humid air, and this functional group exhibits especially strong binding for sulfuric acid. Further thermodynamic analysis shows that the presence of water weakens the binding strength of sulfuric acid with the copper carboxylate group. Our calculations predict that functionalization of aromatic rings would be detrimental to selective capture of COCl2, CO2, and Cl2 under humid conditions. Finally, we found that forming an ionic complex, H3O+·HSO4-, between H2SO4 and H2O via proton transfer is not favorable on copper carboxylate.
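
    As a reminder of how such binding and adsorption free energies are typically assembled from electronic-structure and harmonic vibrational data (a generic recipe, not the paper's exact expressions):

        \Delta E_{\mathrm{bind}} = E_{\mathrm{complex}} - E_{\mathrm{group}} - E_{\mathrm{TIC}},
        \qquad
        \Delta G_{\mathrm{ads}}(T) \approx \Delta E_{\mathrm{bind}} + \Delta \mathrm{ZPE} + \Delta H_{\mathrm{vib}}(T) - T\,\Delta S(T) ,

    with the zero-point, thermal, and entropic corrections obtained from the computed vibrational frequencies of the bound and unbound species.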

  10. Quantifying uncertainty in partially specified biological models: how can optimal control theory help us?

    PubMed

    Adamson, M W; Morozov, A Y; Kuzenkov, O A

    2016-09-01

    Mathematical models in biology are highly simplified representations of a complex underlying reality, and there is always a high degree of uncertainty with regard to model function specification. This uncertainty becomes critical for models in which the use of different functions fitting the same dataset can yield substantially different predictions, a property known as structural sensitivity. Thus, even if the model is purely deterministic, the uncertainty in the model functions carries through into uncertainty in model predictions, and new frameworks are required to tackle this fundamental problem. Here, we consider a framework that uses partially specified models, in which some functions are not represented by a specific form. The main idea is to project the infinite-dimensional function space into a low-dimensional space taking into account biological constraints. The key question of how to carry out this projection has so far remained a serious mathematical challenge and has hindered the use of partially specified models. Here, we propose and demonstrate a potentially powerful technique to perform such a projection by using optimal control theory to construct functions with the specified global properties. This approach opens up the prospect of a flexible and easy-to-use method for carrying out uncertainty analysis of biological models.

  11. Variational Calculation of the Ground State of Closed-Shell Nuclei Up to $A$ = 40

    DOE PAGES

    Lonardoni, Diego; Lovato, Alessandro; Pieper, Steven C.; ...

    2017-08-31

    Variational calculations of ground-state properties of 4He, 16O, and 40Ca are carried out employing realistic phenomenological two- and three-nucleon potentials. The trial wave function includes two- and three-body correlations acting on a product of single-particle determinants. Expectation values are evaluated with a cluster expansion for the spin-isospin dependent correlations, considering up to five-body cluster terms. The optimal wave function is obtained by minimizing the energy expectation value over a set of up to 20 parameters by means of a nonlinear optimization library. We present results for the binding energy, charge radius, point density, single-nucleon momentum distribution, charge form factor, and Coulomb sum rule. We find that the employed three-nucleon interaction becomes repulsive for A ≥ 16. In 16O the inclusion of such a force provides a better description of the properties of the nucleus. In 40Ca, instead, the repulsive behavior of the three-body interaction fails to reproduce experimental data for the charge radius and the charge form factor. We find that the high-momentum region of the momentum distributions, determined by the short-range terms of the nuclear correlations, exhibits a universal behavior independent of the particular nucleus. The comparison of the Coulomb sum rules for 4He, 16O, and 40Ca reported in this work will help elucidate in-medium modifications of the nucleon form factors.
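
    Schematically, the correlated ansatz described above is of the general form commonly used in variational nuclear calculations (our paraphrase, not the paper's exact notation):

        |\Psi_T\rangle \;=\; \Big[\, \mathcal{S} \prod_{i<j} \Big( 1 + \sum_{p} u_{p}(r_{ij})\, O^{p}_{ij} \Big) \Big] \prod_{i<j} f_{c}(r_{ij})\; |\Phi\rangle ,

    where |Φ⟩ is the product of single-particle determinants, f_c is a central (Jastrow-type) correlation, the u_p are spin-isospin dependent correlation functions acting through the operators O^p_{ij}, and S symmetrizes the non-commuting operator product; the cluster expansion mentioned above truncates the expectation values of this wave function at five-body terms.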

  12. Performance analysis of complex repairable industrial systems using PSO and fuzzy confidence interval based methodology.

    PubMed

    Garg, Harish

    2013-03-01

    The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find the most optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, an availability-cost optimization model has been constructed for determining the optimal design parameters that improve system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters, which affect the system performance, are obtained in the form of fuzzy membership functions by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The results computed by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of the washing unit, a main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
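
    A minimal sketch (with made-up numbers, not the case-study data) of the kind of α-cut interval arithmetic used to propagate triangular fuzzy MTBF/MTTR values into a fuzzy steady-state availability A = MTBF/(MTBF + MTTR):

        import numpy as np

        def alpha_cut(tri, alpha):
            """Interval [lo, hi] of a triangular fuzzy number tri = (a, m, b) at level alpha."""
            a, m, b = tri
            return a + alpha * (m - a), b - alpha * (b - m)

        def fuzzy_availability(mtbf_tri, mttr_tri, alphas=np.linspace(0.0, 1.0, 11)):
            """Propagate triangular MTBF/MTTR through A = MTBF / (MTBF + MTTR)."""
            out = []
            for alpha in alphas:
                lo_f, hi_f = alpha_cut(mtbf_tri, alpha)
                lo_r, hi_r = alpha_cut(mttr_tri, alpha)
                # A increases with MTBF and decreases with MTTR, so the endpoints are:
                out.append((alpha, lo_f / (lo_f + hi_r), hi_f / (hi_f + lo_r)))
            return out

        # Illustrative (hypothetical) fuzzy data for one component, in hours.
        for alpha, lo, hi in fuzzy_availability((90.0, 100.0, 110.0), (4.0, 5.0, 6.0)):
            print(f"alpha={alpha:.1f}  A in [{lo:.4f}, {hi:.4f}]")

    The Lambda-Tau methodology in the paper propagates such fuzzy component data through AND/OR gate expressions of the system fault tree in the same α-cut spirit.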

  13. Optimal Information Processing in Biochemical Networks

    NASA Astrophysics Data System (ADS)

    Wiggins, Chris

    2012-02-01

    A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include the quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.

  14. Aerodynamic shape optimization of a HSCT type configuration with improved surface definition

    NASA Technical Reports Server (NTRS)

    Thomas, Almuttil M.; Tiwari, Surendra N.

    1994-01-01

    Two distinct parametrization procedures for generating free-form surfaces to represent aerospace vehicles are presented. The first is representation using spline functions such as nonuniform rational B-splines (NURBS), and the second is a novel (geometrical) parametrization using solutions to a suitably chosen partial differential equation. The main idea is to develop a surface representation which is more versatile and can be used in an optimization process. An unstructured volume grid is generated by an advancing-front algorithm and solutions are obtained using an Euler solver. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using an automatic differentiator precompiler software tool. Aerodynamic shape optimization of a complete aircraft with twenty-four design variables is performed. High-speed civil transport (HSCT) aircraft configurations are targeted to demonstrate the process.

  15. Pericardial application as a new route for implanting stem-cell cardiospheres to treat myocardial infarction.

    PubMed

    Zhang, Jianhua; Wu, Zheng; Fan, Zepei; Qin, Zixi; Wang, Yingwei; Chen, Jiayuan; Wu, Maoxiong; Chen, Yangxin; Wu, Changhao; Wang, Jingfeng

    2018-06-01

    Cardiospheres (CSps) are a promising new form of cardiac stem cells with advantage over other stem cells for myocardial regeneration, but direct implantation of CSps by conventional routes has been limited due to potential embolism. We have implanted CSps into the pericardial cavity and systematically demonstrated its efficacy regarding myocardial infarction. Stem cell potency and cell viability can be optimized in vitro prior to implantation by pre-conditioning CSps with pericardial fluid and hydrogel packing. Transplantation of optimized CSps into the pericardial cavity improved cardiac function and alleviated myocardial fibrosis, increased myocardial cell survival and promoted angiogenesis. Mechanistically, CSps are able to directly differentiate into cardiomyocytes in vivo and promote regeneration of myocardial cells and blood vessels through a paracrine effect with released growth factors as potential paracrine mediators. These findings establish a new strategy for therapeutic myocardial regeneration to treat myocardial infarction. Cardiospheres (CSps) are a new form of cardiac stem cells with an advantage over other stem cells for myocardial regeneration. However, direct implantation of CSps by conventional routes to treat myocardial infarction has been limited due to potential embolism. We have implanted CSps into the pericardial cavity and systematically assessed its efficacy on myocardial infarction. Preconditioning with pericardial fluid enhanced the activity of CSps and matrix hydrogel prolonged their viability. This shows that pretransplant optimization of stem cell potency and maintenance of cell viability can be achieved with CSps. Transplantation of optimized CSps into the pericardial cavity improved cardiac function and alleviated myocardial fibrosis in the non-infarcted area, and increased myocardial cell survival and promoted angiogenesis in the infarcted area. Mechanistically, CSps were able to directly differentiate into cardiomyocytes in vivo and promoted regeneration of myocardial cells and blood vessels in the infarcted area through a paracrine effect with released growth factors in pericardial cavity serving as possible paracrine mediators. This is the first demonstration of direct pericardial administration of pre-optimized CSps, and its effectiveness on myocardial infarction by functional and morphological outcomes with distinct mechanisms. These findings establish a new strategy for therapeutic myocardial regeneration to treat myocardial infarction. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  16. INFORMS Section on Location Analysis Dissertation Award Submission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waddell, Lucas

    This research effort can be summarized by two main thrusts, each of which has a chapter of the dissertation dedicated to it. First, I pose a novel polyhedral approach for identifying polynomially solvable instances of the QAP based on an application of the reformulation-linearization technique (RLT), a general procedure for constructing mixed 0-1 linear reformulations of 0-1 programs. The feasible region of the continuous relaxation of the level-1 RLT form is a polytope having a highly specialized structure. Every binary solution to the QAP is associated with an extreme point of this polytope, and the objective function value is preserved at each such point. However, there exist extreme points that do not correspond to binary solutions. The key insight is a previously unnoticed and unexpected relationship between the polyhedral structure of the continuous relaxation of the level-1 RLT representation and various classes of readily solvable instances. Specifically, we show that a variety of apparently unrelated solvable cases of the QAP can all be categorized in the following sense: each such case has an objective function which ensures that an optimal solution to the continuous relaxation of the level-1 RLT form occurs at a binary extreme point. Interestingly, there exist instances that are solvable by the level-1 RLT form which do not satisfy the conditions of these cases, so that the level-1 form theoretically identifies a richer family of solvable instances. Second, I focus on instances of the QAP known in the literature as linearizable. An instance of the QAP is defined to be linearizable if and only if the problem can be equivalently written as a linear assignment problem that preserves the objective function value at all feasible solutions. I provide an entirely new polyhedral-based perspective on the concept of linearizability by showing that an instance of the QAP is linearizable if and only if a relaxed version of the continuous relaxation of the level-1 RLT form is bounded. I also show that the level-1 RLT form can identify a richer family of solvable instances than those deemed linearizable, by demonstrating that the continuous relaxation of the level-1 RLT form can have an optimal binary solution for instances that are not linearizable. As a byproduct, I use this theoretical framework to characterize explicitly, in closed form, the dimensions of the level-1 RLT form and various other problem relaxations.
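
    For readers unfamiliar with the terminology, the quadratic assignment problem and the linearizability condition referred to above can be written schematically as follows (generic notation, not the dissertation's):

        \min_{x \in X} \; \sum_{i,j,k,l} q_{ijkl}\, x_{ij}\, x_{kl},
        \qquad
        X = \Big\{ x \in \{0,1\}^{n \times n} : \textstyle\sum_{i} x_{ij} = 1 \;\; \forall j, \;\; \sum_{j} x_{ij} = 1 \;\; \forall i \Big\},

    and an instance is linearizable when there exist costs c_{ij} such that \sum_{i,j,k,l} q_{ijkl} x_{ij} x_{kl} = \sum_{i,j} c_{ij} x_{ij} for every feasible assignment x \in X, i.e. the quadratic objective collapses to a linear assignment objective on the feasible set.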

  17. Computing the Partition Function for Kinetically Trapped RNA Secondary Structures

    PubMed Central

    Lorenz, William A.; Clote, Peter

    2011-01-01

    An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest-neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures; indeed, the number of locally optimal structures is approximately the square root of the number of all structures; (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA; and (3) the (modified) maximum expected accuracy structure, computed by taking into account the base-pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in the study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, which was previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
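
    In Boltzmann-ensemble terms, the quantity computed and sampled by RNAlocopt can be written schematically as (our paraphrase):

        Z_{\mathrm{loc}} = \sum_{s \in \mathcal{L}} e^{-E(s)/RT},
        \qquad
        p(s) = \frac{e^{-E(s)/RT}}{Z_{\mathrm{loc}}},

    where \mathcal{L} is the set of locally optimal secondary structures under the Turner model, E(s) is the Turner free energy of structure s, and p(s) is the probability of drawing s from the locally optimal subensemble.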

  18. Compact two-electron wave function for bond dissociation and Van der Waals interactions: a natural amplitude assessment.

    PubMed

    Giesbertz, Klaas J H; van Leeuwen, Robert

    2014-05-14

    Electron correlations in molecules can be divided into short-range dynamical correlations, long-range Van der Waals type interactions, and near-degeneracy static correlations. In this work, we analyze, for a one-dimensional model of a two-electron system, how these three types of correlations can be incorporated in a simple wave function of restricted functional form consisting of an orbital product multiplied by a single correlation function f(r12) depending on the interelectronic distance r12. Since the three types of correlations mentioned lead to different signatures in the natural orbital (NO) amplitudes of two-electron systems, we make an analysis of the wave function in terms of the NO amplitudes for a model system of a diatomic molecule. In our numerical implementation, we fully optimize the orbitals and the correlation function on a spatial grid without restrictions on their functional form. Due to this particular form of the wave function, we can prove that none of the amplitudes vanishes and, moreover, that they display a distinct sign pattern and a series of avoided crossings as a function of the bond distance, in agreement with the exact solution. This shows that the wave function ansatz correctly incorporates the long-range Van der Waals interactions. We further show that the approximate wave function gives an excellent binding curve and is able to describe static correlations. We show that, in order to do this, the correlation function f(r12) needs to diverge for large r12 at large internuclear distances, while for shorter bond distances it increases as a function of r12 to a maximum value, after which it decays exponentially. We further give a physical interpretation of this behavior.
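
    In symbols, the restricted ansatz analyzed above can be summarized as (our notation, written for the spatial part of the singlet ground state):

        \Psi(r_1, r_2) = \varphi(r_1)\, \varphi(r_2)\, f(r_{12}),

    with both the orbital φ and the correlation factor f optimized freely on the grid; the natural orbital amplitudes discussed in the abstract are then obtained by diagonalizing the one-body reduced density matrix of this Ψ.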

  19. DFT and experimental studies of the structure and vibrational spectra of curcumin

    NASA Astrophysics Data System (ADS)

    Kolev, Tsonko M.; Velcheva, Evelina A.; Stamboliyska, Bistra A.; Spiteller, Michael

    The potential energy surface of curcumin [1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione] was explored with the DFT B3LYP functional using the 6-311G* basis set. Single-point calculations were performed at levels up to B3LYP/6-311++G**//B3LYP/6-311G*. All isomers were located and their relative energies determined. According to the calculations, the planar enol form is more stable than the nonplanar diketo form. The optimized molecular structure is presented and compared with experimental X-ray diffraction data. In addition, harmonic vibrational frequencies of the molecule were evaluated theoretically using the B3LYP density functional method. The computed vibrational frequencies were used to determine the types of molecular motions associated with each of the experimental bands observed. Our vibrational data show that in both the solid state and in all studied solutions curcumin exists in the enol form.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusoff, Yusriha Mohd; Salimi, Midhat Nabil Ahmad; Anuar, Adilah

    Many studies have been carried out to prepare hydroxyapatite (HAp) by various methods. In this study, we focused on the preparation of HAp nanoparticles by the sol-gel technique, in which a few parameters were optimized: stirring rate, aging time, and sintering temperature. HAp nanoparticles were prepared using precursors of calcium nitrate tetrahydrate, Ca(NO3)2·4H2O, and phosphorous pentoxide, P2O5. Each precursor was dissolved in ethanol before the two solutions were mixed together to form a stable sol. Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize the functional groups, phase composition, crystallite size and morphology of the nanoparticles produced. FTIR spectra showed that the functional groups present in all five samples correspond to the formation of HAp. XRD showed that only one phase, hydroxyapatite, was formed. SEM showed that the small particles combine to form agglomerates.

  1. Preparation of hydroxyapatite nanoparticles by sol-gel method with optimum processing parameters

    NASA Astrophysics Data System (ADS)

    Yusoff, Yusriha Mohd; Salimi, Midhat Nabil Ahmad; Anuar, Adilah

    2015-05-01

    Many studies have been carried out to prepare hydroxyapatite (HAp) by various methods. In this study, we focused on the preparation of HAp nanoparticles by the sol-gel technique, in which a few parameters were optimized: stirring rate, aging time, and sintering temperature. HAp nanoparticles were prepared using precursors of calcium nitrate tetrahydrate, Ca(NO3)2·4H2O, and phosphorous pentoxide, P2O5. Each precursor was dissolved in ethanol before the two solutions were mixed together to form a stable sol. Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize the functional groups, phase composition, crystallite size and morphology of the nanoparticles produced. FTIR spectra showed that the functional groups present in all five samples correspond to the formation of HAp. XRD showed that only one phase, hydroxyapatite, was formed. SEM showed that the small particles combine to form agglomerates.

  2. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect areas changed by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps. First, an automatic coarse detection step based on a statistical hypothesis test is applied to initialize the classification. The original analytical formula proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form involving the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Second, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, such an optimization problem can be formulated as a Markov random field (MRF), on which the quality of a classification is measured by an energy function; the optimal classification under the MRF corresponds to the lowest energy value. Previous studies provide methods for this optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study the graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration, the parameters of the energy function for the current classification are set from the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement from the graph-cut post-classification step.
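
    The post-classification step can be summarized by the usual MRF energy minimized by graph cuts (written here in a generic Potts-type form; in the paper the data-term parameters come from log-cumulant estimates of the class PDFs):

        E(\omega) = \sum_{s} -\log p\!\left(y_s \mid \omega_s\right) \;+\; \beta \sum_{\{s,t\} \in \mathcal{N}} \mathbf{1}\!\left[\omega_s \neq \omega_t\right],

    where ω_s is the class label of pixel s, y_s its observed backscatter, \mathcal{N} the set of neighboring pixel pairs, and β the smoothness weight; the max-flow/min-cut step returns the labeling ω of minimum energy for the current parameter estimates.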

  3. Formation Flight System Extremum-Seeking-Control Using Blended Performance Parameters

    NASA Technical Reports Server (NTRS)

    Ryan, John J. (Inventor)

    2018-01-01

    An extremum-seeking control system for formation flight that uses blended performance parameters in a conglomerate performance function that better approximates drag reduction than performance functions formed from individual measurements. Generally, a variety of different measurements are taken and fed to a control system, the measurements are weighted, and are then subjected to a peak-seeking control algorithm. As measurements are continually taken, the aircraft will be guided to a relative position which optimizes the drag reduction of the formation. Two embodiments are discussed. Two approaches are shown for determining relative weightings: "a priori" by which they are qualitatively determined (by minimizing the error between the conglomerate function and the drag reduction function), and by periodically updating the weightings as the formation evolves.

  4. Hydrogen bonding in malonaldehyde: a density functional and reparametrized semiempirical approach

    NASA Astrophysics Data System (ADS)

    Kovačević, Goran; Hrenar, Tomica; Došlić, Nadja

    2003-08-01

    Intramolecular proton transfer in malonaldehyde (MA) has been investigated by density functional theory (DFT). The DFT results were used for the construction of a high quality semiempirical potential energy surface with a reparametrized PM3 Hamiltonian. A two-step reparameterization procedure is proposed in which (i) the PM3-MAIS core-core functions for the O-H and H-H interactions were used and a new functional form for the O-O correction function was proposed and (ii) a set of specific reaction parameters (SRP) has been obtained via genetic algorithm optimization. The quality of the reparametrized semiempirical potential energy surfaces was tested by calculating the tunneling splitting of vibrational levels and the anharmonic vibrational frequencies of the system. The applicability to multi-dimensional dynamics in large molecular systems is discussed.

  5. Precise side-chain conformation analysis of L-phenylalanine in α-helical polypeptide by quantum-chemical calculation and 13C CP-MAS NMR measurement

    NASA Astrophysics Data System (ADS)

    Niimura, Subaru; Suzuki, Junya; Kurosu, Hiromichi; Yamanobe, Takeshi; Shoji, Akira

    2010-04-01

    To clarify the positive role of side-chain conformation in the stability of protein secondary structure (main-chain conformation), we calculated the optimized structure of a well-defined α-helical octadecapeptide composed of L-alanine (Ala) and L-phenylalanine (Phe) residues, H-(Ala)8-Phe-(Ala)9-OH, using molecular orbital calculations with density functional theory (DFT/B3LYP/6-31G(d)). From the total energy and the precise secondary structural parameters of the optimized structure, such as main-chain dihedral angles and hydrogen-bond parameters, we confirmed that the conformational stability of an α-helix is affected dominantly by the side-chain conformation (χ1) of the Phe residue in this system: model A (T form: χ1 around 180°) is the most stable α-helix, model B (G+ form: χ1 around -60°) is the next most stable, and model C (G- form: χ1 around 60°) is less stable. In addition, we demonstrate that the stable conformation of poly(L-phenylalanine) is an α-helix with the side chain in the T form, by comparison of the carbonyl 13C chemical shift measured by 13C CP-MAS NMR with the calculated one.

  6. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. The Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will be tested by solving QUBO instances defined on the Chimera graphs of the quantum computer.
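
    A toy sketch of the QUBO framework mentioned above (not the satellite assimilation problem itself): the objective is x^T Q x over binary x, solved here by brute force for a handful of variables; on annealing hardware the same Q matrix would instead be embedded onto the Chimera graph.

        import itertools
        import numpy as np

        def solve_qubo_bruteforce(Q):
            """Minimize x^T Q x over binary vectors x (only feasible for small n)."""
            n = Q.shape[0]
            best_x, best_val = None, np.inf
            for bits in itertools.product((0, 1), repeat=n):
                x = np.array(bits, dtype=float)
                val = x @ Q @ x
                if val < best_val:
                    best_x, best_val = x, val
            return best_x, best_val

        # Small illustrative QUBO matrix (hypothetical coefficients).
        Q = np.array([[-1.0,  2.0,  0.0],
                      [ 0.0, -1.0,  2.0],
                      [ 0.0,  0.0, -1.0]])
        x, val = solve_qubo_bruteforce(Q)
        print(x, val)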

  7. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  8. Mathematical simulation and optimization of cutting mode in turning of workpieces made of nickel-based heat-resistant alloy

    NASA Astrophysics Data System (ADS)

    Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.

    2018-05-01

    A predictive simulation technique for determining optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, different from well-known approaches, is proposed. The impact of various factors on the cutting process is analyzed with the purpose of determining optimal machining parameters in accordance with chosen effectiveness criteria. A mathematical optimization model, algorithms and computer programs, and visual graphical forms reflecting the dependence of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been worked out. A nonlinear model for multidimensional functions, solution of equations with multiple unknowns, a coordinate descent method, and heuristic algorithms are employed to solve the problem of optimizing the cutting mode parameters. The research shows that in machining workpieces made from the heat-resistant alloy AISI N07263, the highest possible productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, and tool life T = 18 min, at a net cost of 2.45 per hour.

  9. Subthreshold SPICE Model Optimization

    NASA Astrophysics Data System (ADS)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of the design in software to verify proper functionality and design requirements. Properties of the fabrication process are provided by foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but when operating devices in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs in the subthreshold region accurately.

  10. Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory

    Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.

  11. Fractional two-compartmental model for articaine serum levels

    NASA Astrophysics Data System (ADS)

    Petronijevic, Branislava; Sarcev, Ivan; Zorica, Dusan; Janev, Marko; Atanackovic, Teodor M.

    2016-06-01

    Two fractional two-compartmental models are applied to the pharmacokinetics of articaine. Integer-order derivatives are replaced by fractional derivatives, either of different or of the same orders. The models are formulated so that mass balance is preserved. Explicit forms of the solutions are obtained in terms of Mittag-Leffler functions. Pharmacokinetic parameters are determined by means of an evolutionary algorithm and trust-region optimization so as to recover the experimental data.
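
    For context, in the simplest one-compartment fractional model of order α, replacing the first derivative by a Caputo derivative turns the exponential washout into a Mittag-Leffler decay (a standard result; the two-compartment solutions in the paper are combinations of such terms):

        {}^{C}\!D^{\alpha}_t\, c(t) = -k\, c(t),
        \qquad
        c(t) = c_0\, E_{\alpha}\!\left(-k\, t^{\alpha}\right),
        \qquad
        E_{\alpha}(z) = \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + 1)} ,

    which reduces to c(t) = c_0 e^{-kt} when α = 1.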

  12. Preclinical Models in Vascularized Composite Allotransplantation

    DTIC Science & Technology

    2015-06-28

    Vascularized composite allotransplantation (VCA) has the potential to reconstruct any non-visceral tissue defect, using like for like tissue, delivering optimal form and function. Over 150 VCA... Reconstructive transplantation. Introduction: To date, over 150 VCA transplants have been performed, most commonly of the hand and face, but also abdominal wall... larynx, lower limb, uterus and penis [1, 2]. Any non-visceral tissue defect can potentially be reconstructed in this manner using like for like tissue

  13. Study on loading path optimization of internal high pressure forming process

    NASA Astrophysics Data System (ADS)

    Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng

    2017-09-01

    In internal high-pressure forming there is no closed-form expression relating the process parameters to the forming result. This article uses numerical simulation to obtain several sets of input parameters and the corresponding outputs, uses a BP neural network to learn the mapping between them, and combines the individual evaluation parameters by weighted summation into a single formula for assessing forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality-evaluation formula serving as the fitness function, and the optimization is carried out over the admissible range of each parameter. The results show that the parameters obtained by the combined BP neural network and particle swarm optimization algorithm meet practical requirements. The method can thus solve the process-parameter optimization problem in internal high-pressure forming.
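
    A minimal sketch of the surrogate-plus-PSO pattern described above, with made-up variable names, bounds, and a synthetic stand-in for the finite-element database (the real inputs would be loading-path parameters such as internal pressure and axial feed, and the output a weighted quality score from the simulations):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Stand-in for the simulation database: inputs X (n_samples, n_params)
        # and a scalar weighted quality score y (lower is better).
        X = rng.uniform(0.0, 1.0, size=(80, 2))
        y = (X[:, 0] - 0.6) ** 2 + 2.0 * (X[:, 1] - 0.3) ** 2 + 0.01 * rng.standard_normal(80)

        surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                 random_state=0).fit(X, y)

        # Plain global-best particle swarm optimization over the surrogate.
        n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
        pos = rng.uniform(0.0, 1.0, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), surrogate.predict(pos)
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(n_iter):
            r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            val = surrogate.predict(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmin(pbest_val)]
        print("estimated optimal parameters:", gbest)

    The essential design choice is that the expensive forming simulation is queried only to build the training set, while the swarm searches the cheap surrogate.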

  14. Cloning strategy for producing brush-forming protein-based polymers.

    PubMed

    Henderson, Douglas B; Davis, Richey M; Ducker, William A; Van Cott, Kevin E

    2005-01-01

    Brush-forming polymers are being used in a variety of applications, and by using recombinant DNA technology, there exists the potential to produce protein-based polymers that incorporate unique structures and functions in these brush layers. Despite this potential, production of protein-based brush-forming polymers is not routinely performed. For the design and production of new protein-based polymers with optimal brush-forming properties, it would be desirable to have a cloning strategy that allows an iterative approach wherein the protein based-polymer product can be produced and evaluated, and then if necessary, it can be sequentially modified in a controlled manner to obtain optimal surface density and brush extension. In this work, we report on the development of a cloning strategy intended for the production of protein-based brush-forming polymers. This strategy is based on the assembly of modules of DNA that encode for blocks of protein-based polymers into a commercially available expression vector; there is no need for custom-modified vectors and no need for intermediate cloning vectors. Additionally, because the design of new protein-based biopolymers can be an iterative process, our method enables sequential modification of a protein-based polymer product. With at least 21 bacterial expression vectors and 11 yeast expression vectors compatible with this strategy, there are a number of options available for production of protein-based polymers. It is our intent that this strategy will aid in advancing the production of protein-based brush-forming polymers.

  15. The multifacet graphically contracted function method. I. Formulation and implementation

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.

    2014-08-01

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  16. The multifacet graphically contracted function method. I. Formulation and implementation.

    PubMed

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R

    2014-08-14

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  17. Hull Form Design and Optimization Tool Development

    DTIC Science & Technology

    2012-07-01

    global minimum. The algorithm accomplishes this by using a method known as metaheuristics, which allows the algorithm to examine a large area by... further development of these tools, including the implementation and testing of a new optimization algorithm, the improvement of a rapid hull form... under the 2012 Naval Research Enterprise Intern Program. Subject terms: hydrodynamic, hull form, generation, optimization, algorithm

  18. Computer modeling of a two-junction, monolithic cascade solar cell

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.; Abbott, D.

    1979-01-01

    The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency predicted for the optimized structure is greater than 30% at 300 K, AM0, and one sun. The analytical method predicts device performance characteristics as a function of temperature over a range restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.

  19. Development of new smart materials and spinning systems inspired by natural silks and their applications

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Lee, Sang-Hoon

    2015-12-01

    Silks produced by spiders and silkworms are charming natural biological materials with highly optimized hierarchical structures and outstanding physicomechanical properties. The superior performance of silks relies on the integration of a unique protein sequence, a distinctive spinning process, and complex hierarchical structures. Silks have been prepared to form a variety of morphologies and are widely used in diverse applications, for example, in the textile industry, as drug delivery vehicles, and as tissue engineering scaffolds. This review presents an overview of the organization of natural silks, in which chemical and physical functions are optimized, as well as a range of new materials inspired by the desire to mimic natural silk structure and synthesis.

  20. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    USGS Publications Warehouse

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
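
    A sketch of the sampling strategy evaluated above, assuming some exact 1-D optimal classifier fisher_jenks_breaks(values, k) is available (a hypothetical helper standing in for a dynamic-programming Fisher-Jenks implementation): class breaks are computed on a random sample and then applied to the full attribute vector.

        import numpy as np

        def classify_by_sampling(values, k, sample_size, breaks_fn, rng=None):
            """Compute class breaks on a random sample, then bin the full data set.

            breaks_fn(sample, k) is assumed to return the k - 1 interior break
            values (hypothetical stand-in for an exact Fisher-Jenks routine)."""
            rng = np.random.default_rng(rng)
            sample = rng.choice(values, size=min(sample_size, len(values)), replace=False)
            breaks = np.sort(breaks_fn(sample, k))
            return np.digitize(values, breaks)   # class index 0..k-1 for every observation

        # Example with a crude quantile-based stand-in for breaks_fn.
        values = np.random.default_rng(0).lognormal(size=1_000_000)
        labels = classify_by_sampling(
            values, k=5, sample_size=10_000,
            breaks_fn=lambda s, k: np.quantile(s, np.linspace(0, 1, k + 1)[1:-1]))
        print(np.bincount(labels))

    The tradeoff studied in the paper is between the speed gained by classifying only the sample and the accuracy lost relative to full enumeration of the attribute values.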

  1. Simulations of Metallic Nanoscale Structures

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    2003-03-01

    Density-functional-theory calculations can be used to understand and predict materials properties based on their nanoscale composition and structure. In combination with efficient search algorithms, DFT can furthermore be applied in the nanoscale design of optimized materials. The first part of the talk will focus on two different types of nanostructures with an interesting interplay between chemical activity and conducting states. MoS2 nanoclusters are known for their catalyzing effect in the hydrodesulfurization process which removes sulfur-containing molecules from oil products. MoS2 is a layered material which is insulating. However, DFT calculations indicate the existence of metallic states at some of the edges of MoS2 nanoclusters, and the calculations show that the conducting states are not passivated by, for example, the presence of hydrogen gas. The edge states may play an important role in the chemical activity of MoS2. Metallic nanocontacts can be formed during the breaking of a piece of metal, and atomically thin structures with conductance of only a single quantum unit may be formed. Such open metallic structures are chemically very active and susceptible to restructuring through interactions with molecular gases. DFT calculations show, for example, that atomically thin gold wires may incorporate oxygen atoms forming a new type of metallic nanowire. Adsorbates like hydrogen may also affect the conductance. In the last part of the talk I shall discuss the possibilities for designing alloys with optimal mechanical properties based on a combination of DFT calculations with genetic search algorithms. Simultaneous optimization of several parameters (stability, price, compressibility) is addressed through the determination of Pareto optimal alloy compositions within a large database of more than 64000 alloys.

  2. Network-wide reorganization of procedural memory during NREM sleep revealed by fMRI

    PubMed Central

    Vahdat, Shahabeddin; Fogel, Stuart; Benali, Habib; Doyon, Julien

    2017-01-01

    Sleep is necessary for the optimal consolidation of newly acquired procedural memories. However, the mechanisms by which motor memory traces develop during sleep remain controversial in humans, as this process has been mainly investigated indirectly by comparing pre- and post-sleep conditions. Here, we used functional magnetic resonance imaging and electroencephalography during sleep following motor sequence learning to investigate how newly-formed memory traces evolve dynamically over time. We provide direct evidence for transient reactivation followed by downscaling of functional connectivity in a cortically-dominant pattern formed during learning, as well as gradual reorganization of this representation toward a subcortically-dominant consolidated trace during non-rapid eye movement (NREM) sleep. Importantly, the putamen functional connectivity within the consolidated network during NREM sleep was related to overnight behavioral gains. Our results demonstrate that NREM sleep is necessary for two complementary processes: the restoration and reorganization of newly-learned information during sleep, which underlie human motor memory consolidation. DOI: http://dx.doi.org/10.7554/eLife.24987.001 PMID:28892464

  3. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions that belongs to the class of cutting-plane methods. When constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems for constructing iteration points are linear programming problems. During the optimization process the sets that approximate the epigraph can be updated; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
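
    For readers unfamiliar with cutting-plane schemes, the sketch below shows the generic Kelley-style idea of approximating the epigraph of a convex nonsmooth function by accumulated cutting planes, with a linear program (via scipy.optimize.linprog) as the auxiliary problem. The test function, box feasible set, and stopping rule are assumptions for illustration; the paper's specific embedding sets and plane-dropping updates are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def f(x):                          # convex, nonsmooth test objective
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)

def subgrad(x):                    # one subgradient of f at x
    return np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 2.0)])

box = [(-5.0, 5.0), (-5.0, 5.0)]   # polyhedral feasible set
xk = np.array([4.0, 4.0])
cuts_A, cuts_b = [], []            # cutting planes t >= f(xi) + gi.(x - xi)

for it in range(30):
    g = subgrad(xk)
    cuts_A.append([g[0], g[1], -1.0])              # gi.x - t <= gi.xi - f(xi)
    cuts_b.append(g @ xk - f(xk))
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=cuts_A, b_ub=cuts_b,
                  bounds=box + [(None, None)], method="highs")
    xk, tk = res.x[:2], res.x[2]
    if f(xk) - tk < 1e-6:          # tk is a lower bound on the true minimum
        break

print("approximate minimizer:", np.round(xk, 3), "f =", round(float(f(xk)), 6))
```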

  4. Automated Optimization of Potential Parameters

    PubMed Central

    Di Pierro, Michele; Elber, Ron

    2013-01-01

    An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115

  5. Characteristics and functionality of appetite-reducing thylakoid powders produced by three different drying processes.

    PubMed

    Östbring, Karolina; Sjöholm, Ingegerd; Sörenson, Henrietta; Ekholm, Andrej; Erlanson-Albertsson, Charlotte; Rayner, Marilyn

    2018-03-01

    Thylakoids, chloroplast membranes extracted from green leaves, are a promising functional ingredient with appetite-reducing properties via their lipase-inhibiting effect. Thylakoids in powder form have been evaluated in animal and human models, but no comprehensive study has been conducted on powder characteristics. The aim was to investigate the effects of different isolation methods and drying techniques (drum-drying, spray-drying, freeze-drying) on thylakoids' physicochemical and functional properties. Freeze-drying yielded thylakoid powders with the highest lipase-inhibiting capacity. We hypothesize that the specific macromolecular structures involved in lipase inhibition were degraded to different degrees by exposure to heat during spray-drying and drum-drying. We identified lightness (Hunter's L-value), greenness (Hunter's a-value), chlorophyll content and emulsifying capacity to be correlated to lipase-inhibiting capacity. Thus, to optimize the thylakoids' functional properties, the internal membrane structure indicated by retained green colour should be preserved. This opens possibilities to use chlorophyll content as a marker for thylakoid functionality in screening processes during process optimization. Thylakoids are heat sensitive, and a mild drying technique should be used in industrial production. Strong links between physicochemical parameters and lipase inhibition capacity were found that can be used to predict functionality. The approach from this study can be applied towards production of standardized high-quality functional food ingredients. © 2017 Society of Chemical Industry.

  6. Modular assembly of thick multifunctional cardiac patches

    PubMed Central

    Fleischer, Sharon; Shapira, Assaf; Feiner, Ron; Dvir, Tal

    2017-01-01

    In cardiac tissue engineering cells are seeded within porous biomaterial scaffolds to create functional cardiac patches. Here, we report on a bottom-up approach to assemble a modular tissue consisting of multiple layers with distinct structures and functions. Albumin electrospun fiber scaffolds were laser-patterned to create microgrooves for engineering aligned cardiac tissues exhibiting anisotropic electrical signal propagation. Microchannels were patterned within the scaffolds and seeded with endothelial cells to form closed lumens. Moreover, cage-like structures were patterned within the scaffolds and accommodated poly(lactic-co-glycolic acid) (PLGA) microparticulate systems that controlled the release of VEGF, which promotes vascularization, or dexamethasone, an anti-inflammatory agent. The structure, morphology, and function of each layer were characterized, and the tissue layers were grown separately in their optimal conditions. Before transplantation the tissue and microparticulate layers were integrated by an ECM-based biological glue to form thick 3D cardiac patches. Finally, the patches were transplanted in rats, and their vascularization was assessed. Because of the simple modularity of this approach, we believe that it could be used in the future to assemble other multicellular, thick, 3D, functional tissues. PMID:28167795

  7. Sleep, Memory & Brain Rhythms

    PubMed Central

    Watson, Brendon O.; Buzsáki, György

    2015-01-01

    Sleep occupies roughly one-third of our lives, yet the scientific community is still not entirely clear on its purpose or function. Existing data point most strongly to its role in memory and homeostasis: that sleep helps maintain basic brain functioning via a homeostatic mechanism that loosens connections between overworked synapses, and that sleep helps consolidate and re-form important memories. In this review, we will summarize these theories, but also focus on substantial new information regarding the relation of electrical brain rhythms to sleep. In particular, while REM sleep may contribute to the homeostatic weakening of overactive synapses, a prominent and transient oscillatory rhythm called “sharp-wave ripple” seems to allow for consolidation of behaviorally relevant memories across many structures of the brain. We propose that a theory of sleep involving the division of labor between two states of sleep–REM and non-REM, the latter of which has an abundance of ripple electrical activity–might allow for a fusion of the two main sleep theories. This theory then postulates that sleep performs a combination of consolidation and homeostasis that promotes optimal knowledge retention as well as optimal waking brain function. PMID:26097242

  8. Normal and abnormal tissue identification system and method for medical images such as digital mammograms

    NASA Technical Reports Server (NTRS)

    Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)

    2001-01-01

    A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.

  9. Sleep, Memory & Brain Rhythms.

    PubMed

    Watson, Brendon O; Buzsáki, György

    2015-01-01

    Sleep occupies roughly one-third of our lives, yet the scientific community is still not entirely clear on its purpose or function. Existing data point most strongly to its role in memory and homeostasis: that sleep helps maintain basic brain functioning via a homeostatic mechanism that loosens connections between overworked synapses, and that sleep helps consolidate and re-form important memories. In this review, we will summarize these theories, but also focus on substantial new information regarding the relation of electrical brain rhythms to sleep. In particular, while REM sleep may contribute to the homeostatic weakening of overactive synapses, a prominent and transient oscillatory rhythm called "sharp-wave ripple" seems to allow for consolidation of behaviorally relevant memories across many structures of the brain. We propose that a theory of sleep involving the division of labor between two states of sleep-REM and non-REM, the latter of which has an abundance of ripple electrical activity-might allow for a fusion of the two main sleep theories. This theory then postulates that sleep performs a combination of consolidation and homeostasis that promotes optimal knowledge retention as well as optimal waking brain function.

  10. Direct conversion of hydride- to siloxane-terminated silicon quantum dots

    DOE PAGES

    Anderson, Ryan T.; Zang, Xiaoning; Fernando, Roshan; ...

    2016-10-20

    Here, peripheral surface functionalization of hydride-terminated silicon quantum dots (SiQD) is necessary in order to minimize their oxidation/aggregation and allow for solution processability. Historically, thermal hydrosilylation addition of alkenes and alkynes across the Si-H surface to form Si-C bonds has been the primary method to achieve this. Here we demonstrate a mild alternative approach to functionalize hydride-terminated SiQDs using bulky silanols in the presence of free-radical initiators to form stable siloxane (~Si-O-SiR3) surfaces with hydrogen gas as a byproduct. This offers an alternative to existing methods of forming siloxane surfaces that require corrosive Si-Cl based chemistry with HCl byproducts. A 52 nm blue shift in the photoluminescent spectra of siloxane versus alkyl-functionalized SiQDs is observed that we explain using computational theory. Model compound synthesis of silane and silsesquioxane analogues is used to optimize surface chemistry and elucidate reaction mechanisms. Thorough characterization of the extent of siloxane surface coverage is provided using FTIR and XPS. Finally, TEM is used to demonstrate SiQD size and integrity after surface chemistry and product isolation.

  11. Estimation of time averages from irregularly spaced observations - With application to coastal zone color scanner estimates of chlorophyll concentration

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Schlax, Michael G.

    1991-01-01

    The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
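
    A minimal numerical sketch of the contrast between the composite (simple) average and a minimum-mean-squared-error Gauss-Markov estimate of a time average from irregular samples, assuming a Gaussian-shaped signal autocovariance and white measurement error. The covariance model, numbers, and quadrature grid are illustrative assumptions, not the CZCS chlorophyll analysis.

```python
import numpy as np

def autocov(tau, signal_var=1.0, tcorr=10.0):
    """Assumed signal autocovariance model (Gaussian in lag)."""
    return signal_var * np.exp(-(tau / tcorr) ** 2)

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0.0, 30.0, size=12))         # irregular observation times
noise_var = 0.3                                           # measurement error variance
t_fine = np.linspace(0.0, 30.0, 400)                      # quadrature grid for the average

# Data-data covariance (signal plus uncorrelated measurement error)
Cdd = autocov(t_obs[:, None] - t_obs[None, :]) + noise_var * np.eye(len(t_obs))
# Data-average covariance: autocovariance averaged over the target period
Cda = autocov(t_obs[:, None] - t_fine[None, :]).mean(axis=1)

w_opt = np.linalg.solve(Cdd, Cda)                         # optimal (Gauss-Markov) weights
w_comp = np.full(len(t_obs), 1.0 / len(t_obs))            # composite-average weights

var_avg = autocov(t_fine[:, None] - t_fine[None, :]).mean()
def mse(w):                                               # expected squared error of w @ data
    return var_avg - 2 * w @ Cda + w @ Cdd @ w

print("MSE composite:", round(float(mse(w_comp)), 4), " MSE optimal:", round(float(mse(w_opt)), 4))
```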

  12. Nonlinear optimization method of ship floating condition calculation in wave based on vector

    NASA Astrophysics Data System (ADS)

    Ding, Ning; Yu, Jian-xing

    2014-08-01

    The ship floating condition in regular waves is calculated. New equations governing any ship's floating condition are proposed using vector operations. This formulation is a nonlinear optimization problem that can be solved using the penalty function method with constant coefficients, and the solution process is accelerated by dichotomy (bisection). During the solution process, the ship's displacement and buoyancy centre are calculated by integrating the ship surface up to the waterline. The ship surface is described using accumulative chord length theory in order to determine the displacement, the buoyancy centre and the waterline. The draught forming the waterline at each station can be found by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient: it can calculate the ship floating condition in regular waves while simplifying the calculation and improving the computational efficiency and the precision of the results.
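
    The penalty-function idea the abstract relies on can be sketched generically (this is not the ship hydrostatics model): an equality constraint is folded into the objective with a constant penalty coefficient and the resulting unconstrained problem is handed to a derivative-free minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):                       # illustrative smooth objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def equilibrium(x):                     # illustrative equality constraint h(x) = 0
    return x[0] + x[1] - 3.0

mu = 1.0e4                              # constant penalty coefficient
penalized = lambda x: objective(x) + mu * equilibrium(x) ** 2

res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print("solution:", np.round(res.x, 4), " constraint residual:", float(equilibrium(res.x)))
```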

  13. Protein Folding Free Energy Landscape along the Committor - the Optimal Folding Coordinate.

    PubMed

    Krivov, Sergei V

    2018-06-06

    Recent advances in simulation and experiment have led to dramatic increases in the quantity and complexity of produced data, which makes the development of automated analysis tools very important. A powerful approach to analyze dynamics contained in such data sets is to describe/approximate it by diffusion on a free energy landscape - free energy as a function of reaction coordinates (RC). For the description to be quantitatively accurate, RCs should be chosen in an optimal way. Recent theoretical results show that such an optimal RC exists; however, determining it for practical systems is a very difficult unsolved problem. Here we describe a solution to this problem. We describe an adaptive nonparametric approach to accurately determine the optimal RC (the committor) for an equilibrium trajectory of a realistic system. In contrast to alternative approaches, which require a functional form with many parameters to approximate an RC and thus extensive expertise with the system, the suggested approach is nonparametric and can approximate any RC with high accuracy without system specific information. To avoid overfitting for a realistically sampled system, the approach performs RC optimization in an adaptive manner by focusing optimization on less optimized spatiotemporal regions of the RC. The power of the approach is illustrated on a long equilibrium atomistic folding simulation of HP35 protein. We have determined the optimal folding RC - the committor, which was confirmed by passing a stringent committor validation test. It allowed us to determine the first quantitatively accurate protein folding free energy landscape. We have confirmed the recent theoretical results that diffusion on such a free energy profile can be used to compute exactly the equilibrium flux, the mean first passage times, and the mean transition path times between any two points on the profile. We have shown that the mean squared displacement along the optimal RC grows linearly with time, as for simple diffusion. The free energy profile allowed us to obtain a direct rigorous estimate of the pre-exponential factor for the folding dynamics.

  14. Features of spatial and functional segregation and integration of the primate connectome revealed by trade-off between wiring cost and efficiency

    PubMed Central

    Chen, Yuhan; Wang, Shengjun

    2017-01-01

    The primate connectome, possessing a characteristic global topology and specific regional connectivity profiles, is well organized to support both segregated and integrated brain function. However, the organization mechanisms shaping the characteristic connectivity and its relationship to functional requirements remain unclear. The primate brain connectome is shaped by metabolic economy as well as functional values. Here, we explored the influence of two competing factors and additional advanced functional requirements on the primate connectome employing an optimal trade-off model between neural wiring cost and the representative functional requirement of processing efficiency. Moreover, we compared this model with a generative model combining spatial distance and topological similarity, with the objective of statistically reproducing multiple topological features of the network. The primate connectome indeed displays a cost-efficiency trade-off, and up to 67% of the connections were recovered by an optimal combination of the two basic factors of wiring economy and processing efficiency, clearly higher than the proportion of connections (56%) explained by the generative model. While not explicitly aimed for, the trade-off model captured several key topological features of the real connectome, as did the generative model, yet better explained the connectivity of most regions. The majority of the remaining 33% of connections unexplained by the best trade-off model were long-distance links, which are concentrated on a few cortical areas, termed long-distance connectors (LDCs). The LDCs are mainly non-hubs, but form a densely connected group overlapping on spatially segregated functional modalities. LDCs are crucial for both functional segregation and integration across different scales. These organization features revealed by the optimization analysis provide evidence that the demands of advanced functional segregation and integration among spatially distributed regions may play a significant role in shaping the cortical connectome, in addition to the basic cost-efficiency trade-off. These findings also shed light on inherent vulnerabilities of brain networks in diseases. PMID:28961235

  15. Features of spatial and functional segregation and integration of the primate connectome revealed by trade-off between wiring cost and efficiency.

    PubMed

    Chen, Yuhan; Wang, Shengjun; Hilgetag, Claus C; Zhou, Changsong

    2017-09-01

    The primate connectome, possessing a characteristic global topology and specific regional connectivity profiles, is well organized to support both segregated and integrated brain function. However, the organization mechanisms shaping the characteristic connectivity and its relationship to functional requirements remain unclear. The primate brain connectome is shaped by metabolic economy as well as functional values. Here, we explored the influence of two competing factors and additional advanced functional requirements on the primate connectome employing an optimal trade-off model between neural wiring cost and the representative functional requirement of processing efficiency. Moreover, we compared this model with a generative model combining spatial distance and topological similarity, with the objective of statistically reproducing multiple topological features of the network. The primate connectome indeed displays a cost-efficiency trade-off, and up to 67% of the connections were recovered by an optimal combination of the two basic factors of wiring economy and processing efficiency, clearly higher than the proportion of connections (56%) explained by the generative model. While not explicitly aimed for, the trade-off model captured several key topological features of the real connectome, as did the generative model, yet better explained the connectivity of most regions. The majority of the remaining 33% of connections unexplained by the best trade-off model were long-distance links, which are concentrated on a few cortical areas, termed long-distance connectors (LDCs). The LDCs are mainly non-hubs, but form a densely connected group overlapping on spatially segregated functional modalities. LDCs are crucial for both functional segregation and integration across different scales. These organization features revealed by the optimization analysis provide evidence that the demands of advanced functional segregation and integration among spatially distributed regions may play a significant role in shaping the cortical connectome, in addition to the basic cost-efficiency trade-off. These findings also shed light on inherent vulnerabilities of brain networks in diseases.

  16. Rejuvenating cellular respiration for optimizing respiratory function: targeting mitochondria.

    PubMed

    Agrawal, Anurag; Mabalirajan, Ulaganathan

    2016-01-15

    Altered bioenergetics with increased mitochondrial reactive oxygen species production and degradation of epithelial function are key aspects of pathogenesis in asthma and chronic obstructive pulmonary disease (COPD). This motif is not unique to obstructive airway disease; it has also been reported in related airway diseases such as bronchopulmonary dysplasia and in parenchymal diseases such as pulmonary fibrosis. Similarly, mitochondrial dysfunction in vascular endothelium or skeletal muscles contributes to the development of pulmonary hypertension and systemic manifestations of lung disease. In experimental models of COPD or asthma, the use of mitochondria-targeted antioxidants, such as MitoQ, has substantially improved mitochondrial health and restored respiratory function. Modulation of noncoding RNA or protein regulators of mitochondrial biogenesis, dynamics, or degradation has been found to be effective in models of fibrosis, emphysema, asthma, and pulmonary hypertension. Transfer of healthy mitochondria to epithelial cells has been associated with remarkable therapeutic efficacy in models of acute lung injury and asthma. Together, these form a 3R model (repair, reprogramming, and replacement) for mitochondria-targeted therapies in lung disease. This review highlights the key role of mitochondrial function in lung health and disease, with a focus on asthma and COPD, and provides an overview of mitochondria-targeted strategies for rejuvenating cellular respiration and optimizing respiratory function in lung diseases. Copyright © 2016 the American Physiological Society.

  17. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
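
    The objective-function idea can be sketched independently of EMO itself: score a candidate circle by the fraction of its perimeter samples that fall on edge pixels of the edge map, which EMO (or any other optimizer) would then maximize over the circle parameters. Array layout, sampling density, and the synthetic edge map below are assumptions for illustration.

```python
import numpy as np

def circle_match_score(edge_map, cx, cy, r, n_points=90):
    """Fraction of sampled circle perimeter points that land on edge pixels.

    edge_map : 2-D boolean array (e.g., from an edge detector), indexed [row=y, col=x];
    (cx, cy, r) is the candidate circle. Higher score = closer resemblance to a cell boundary.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.round(cx + r * np.cos(theta)).astype(int)
    ys = np.round(cy + r * np.sin(theta)).astype(int)
    h, w = edge_map.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    return edge_map[ys[inside], xs[inside]].sum() / n_points

# Tiny synthetic check: an edge map containing a ring of radius 20 centred at (50, 50).
yy, xx = np.mgrid[0:100, 0:100]
edges = np.abs(np.hypot(xx - 50, yy - 50) - 20.0) < 1.0
print(circle_match_score(edges, 50, 50, 20))   # high score (near 1.0)
print(circle_match_score(edges, 30, 70, 15))   # low score
```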

  18. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713

  19. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

    The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) such time cannot be expressed in analytical form as a function of parameters and initial conditions; 2) design parameters may range over very wide intervals; 3) convergence time depends also on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not require an analytical expression of the objective function: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize convergence time under the worst initial conditions. Results are very promising.
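
    A toy sketch of the min-max formulation: the inner maximization ranges over sampled initial conditions, and the outer minimization over the design parameters uses a derivative-free method (Nelder-Mead here). The `convergence_time` function below is a hypothetical stand-in for the closed-loop attitude simulation, and two parameters are used instead of the paper's four; the fast probing technique is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def convergence_time(params, x0):
    """Hypothetical stand-in for a closed-loop attitude simulation: returns the time
    needed to reach the desired attitude for design parameters `params` and initial
    condition `x0`. In practice this would run the spacecraft/magnetorquer model."""
    k1, k2 = params
    # toy surrogate: smooth bowl with a parameter-dependent minimum, perturbed by x0
    return (k1 - 2.0) ** 2 + (k2 - 0.5) ** 2 + 1.0 + 0.2 * np.linalg.norm(x0)

rng = np.random.default_rng(2)
initial_conditions = rng.normal(size=(8, 3))          # sampled worst-case candidates

def worst_case(params):                               # inner max over initial conditions
    return max(convergence_time(params, x0) for x0 in initial_conditions)

# outer min over design parameters with a derivative-free method
res = minimize(worst_case, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print("robust parameters:", np.round(res.x, 3), "worst-case time:", round(res.fun, 3))
```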

  20. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue becomes the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and use inherent algebraic structures in the objective functions to rewrite them in quartic form, and in the case of WISL minimization, to additionally derive an alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
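
    For context, the quantity being minimized can be sketched directly: the integrated sidelobe level of a single unimodular code, computed from its aperiodic autocorrelation via an FFT. The random-phase and Golomb-type polyphase codes below are standard illustrative choices, not the paper's designs, and the MM-based minimization itself is not reproduced.

```python
import numpy as np

def isl(code):
    """Integrated sidelobe level of a unimodular code via FFT-based autocorrelation."""
    n = len(code)
    spec = np.fft.fft(code, 2 * n)                 # zero-padded spectrum
    r = np.fft.ifft(np.abs(spec) ** 2)             # aperiodic autocorrelation, lags 0..2n-1
    sidelobes = r[1:n]                             # positive lags 1..n-1
    return 2.0 * np.sum(np.abs(sidelobes) ** 2)    # negative lags contribute equally

# Unimodular examples: random phases versus a chirp-like (Golomb-type) polyphase code
n = 64
rng = np.random.default_rng(3)
random_code = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
golomb = np.exp(1j * np.pi * np.arange(n) * (np.arange(n) + 1) / n)
print("ISL random:", round(float(isl(random_code)), 1), " ISL Golomb:", round(float(isl(golomb)), 1))
```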

  1. pySecDec: A toolbox for the numerical evaluation of multi-scale integrals

    NASA Astrophysics Data System (ADS)

    Borowka, S.; Heinrich, G.; Jahn, S.; Jones, S. P.; Kerner, M.; Schlenk, J.; Zirke, T.

    2018-01-01

    We present pySECDEC, a new version of the program SECDEC, which performs the factorization of dimensionally regulated poles in parametric integrals, and the subsequent numerical evaluation of the finite coefficients. The algebraic part of the program is now written in the form of python modules, which allow a very flexible usage. The optimization of the C++ code, generated using FORM, is improved, leading to a faster numerical convergence. The new version also creates a library of the integrand functions, such that it can be linked to user-specific codes for the evaluation of matrix elements in a way similar to analytic integral libraries.

  2. Closed-form estimates of the domain of attraction for nonlinear systems via fuzzy-polynomial models.

    PubMed

    Pitarch, José Luis; Sala, Antonio; Ariño, Carlos Vicente

    2014-04-01

    In this paper, the domain of attraction of the origin of a nonlinear system is estimated in closed form via level sets with polynomial boundaries, iteratively computed. In particular, the domain of attraction is expanded from a previous estimate, such as a classical Lyapunov level set. With the use of fuzzy-polynomial models, the domain of attraction analysis can be carried out via sum of squares optimization and an iterative algorithm. The result is a function that bounds the domain of attraction, free from the usual restriction of being positive and decrescent in all the interior of its level sets.

  3. Fuzzy multiobjective models for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluating reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision maker's preferences are incorporated within fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by conflicting goals of energy production, water quality and conservation releases. Results provide insight into compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.

  4. Bernoulli substitution in the Ramsey model: Optimal trajectories under control constraints

    NASA Astrophysics Data System (ADS)

    Krasovskii, A. A.; Lebedev, P. D.; Tarasyev, A. M.

    2017-05-01

    We consider a neoclassical (economic) growth model. The nonlinear Ramsey equation modeling capital dynamics is, in the case of a Cobb-Douglas production function, reduced to a linear differential equation via a Bernoulli substitution. This considerably facilitates the search for a solution to the optimal growth problem with logarithmic preferences. The study deals with solving the corresponding infinite horizon optimal control problem. We consider a vector field of the Hamiltonian system in the Pontryagin maximum principle, taking into account control constraints. We prove the existence of two alternative steady states, depending on the constraints. A proposed algorithm for constructing growth trajectories combines methods of open-loop control and closed-loop regulatory control. For some levels of constraints and initial conditions, a closed-form solution is obtained. We also demonstrate the impact of technological change on the economic equilibrium dynamics. Results are supported by computer calculations.
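
    The substitution itself can be stated compactly (notation assumed here for illustration: Cobb-Douglas output A k^alpha, saving/investment rate s, depreciation delta; the paper's control-constrained problem adds further structure):

```latex
\dot{k} = s\,A\,k^{\alpha} - \delta k
\qquad\xrightarrow{\;z \,=\, k^{1-\alpha}\;}\qquad
\dot{z} = (1-\alpha)\,k^{-\alpha}\,\dot{k} = (1-\alpha)\bigl(sA - \delta z\bigr)
```

    The linear equation in z integrates in closed form and maps back through k = z^{1/(1-alpha)}, which is what makes closed-form growth trajectories tractable.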

  5. Output-Feedback Control of Unknown Linear Discrete-Time Systems With Stochastic Measurement and Process Noise via Approximate Dynamic Programming.

    PubMed

    Wang, Jun-Sheng; Yang, Guang-Hong

    2017-07-25

    This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
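
    As a point of reference only (not the paper's method, which is model-free, output-feedback, and handles stochastic noise), iterating the Bellman equation for a known discrete-time LQ model reduces to the Riccati-style value iteration sketched below; the system matrices are arbitrary illustrative choices.

```python
import numpy as np

# Model-based analogue of iterating the Bellman equation for a discrete-time LQ problem:
# the Riccati recursion to which ADP-style value iteration converges with a known model.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

P = np.zeros((2, 2))
for _ in range(500):                                   # value iteration / Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy policy for current value fn
    P_next = Q + A.T @ P @ (A - B @ K)                 # Bellman backup of the value matrix
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

print("near-optimal gain K:", np.round(K, 4))
```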

  6. Market penetration of energy supply technologies

    NASA Astrophysics Data System (ADS)

    Condap, R. J.

    1980-03-01

    Techniques to incorporate the concepts of profit-induced growth and risk aversion into policy-oriented optimization models of the domestic energy sector are examined. After reviewing the pertinent market penetration literature, simple mathematical programs in which the introduction of new energy technologies is constrained primarily by the reinvestment of profits are formulated. The main results involve the convergence behavior of technology production levels under various assumptions about the form of the energy demand function. Next, profitability growth constraints are embedded in a full-scale model of U.S. energy-economy interactions. A rapidly convergent algorithm is developed to utilize optimal shadow prices in the computation of profitability for individual technologies. Allowance is made for additional policy variables such as government funding and taxation. The result is an optimal deployment schedule for current and future energy technologies which is consistent with the sector's ability to finance capacity expansion.

  7. Study on Edge Thickening Flow Forming Using the Finite Elements Analysis

    NASA Astrophysics Data System (ADS)

    Kim, Young Jin; Park, Jin Sung; Cho, Chongdu

    2011-08-01

    This study examines the forming features of the flow stress property and an incremental forming method for increasing the thickness of the material. Recently, optimized forming methods have been widely studied through finite element analysis to optimize forming process conditions in many different forming fields. The optimal forming method should be adopted to meet geometric requirements such as the reduction in volume per unit length of material in forging, rolling, spinning, etc. However, conventional studies have not dealt with issues regarding volume per unit length. For the study we use the finite element method and model a gear part of an automotive engine flywheel, which is a welded assembly of a plate and a gear of different thicknesses. In the simulations of the present study, an optimized forming condition for gear machining, considering the thickness of the outer edge of the flywheel, is studied using finite element analysis of the thickness-increasing forming method. It is concluded from the study that the forming method for increasing the thickness per unit length for gear machining is reasonable, based on the finite element analysis and a forming test.

  8. A test strip platform based on DNA-functionalized gold nanoparticles for on-site detection of mercury (II) ions.

    PubMed

    Guo, Zhiyong; Duan, Jing; Yang, Fei; Li, Min; Hao, Tingting; Wang, Sui; Wei, Danyi

    2012-05-15

    A test strip, based on DNA-functionalized gold nanoparticles for Hg(2+) detection, has been developed, optimized and validated. The developed colorimetric mercury sensor system exhibited a highly sensitive and selective response to mercury. The measurement principle is based on thymine-Hg(2+)-thymine (T-Hg(2+)-T) coordination chemistry and streptavidin-biotin interaction. A biotin-labeled and thiolated DNA was immobilized on the gold nanoparticles (AuNPs) surface through a self-assembling method. Another thymine-rich DNA, which was introduced to form DNA duplexes on the AuNPs surface with thymine-Hg(2+)-thymine (T-Hg(2+)-T) coordination in the presence of Hg(2+), was immobilized on the nitrocellulose membrane as the test zone. When Hg(2+) ions were introduced into this system, they induced the two strands of DNA to intertwist by forming T-Hg(2+)-T bonds, resulting in a red line at the test zone. The biotin-labeled and thiolated DNA-functionalized AuNPs could be captured by streptavidin which was immobilized on the nitrocellulose membrane as the control zone. Under optimized conditions, the detection limit for Hg(2+) was 3 nM, which is lower than the 10 nM maximum contaminant limit defined by the US Environmental Protection Agency (EPA) for drinking water. A parallel analysis of Hg(2+) in pool water samples using cold vapor atomic absorption spectrometry showed comparable results to those obtained from the strip test. Therefore, the results obtained in this study could be used as basic research for the development of Hg(2+) detection, and the method developed could be a potential on-site screening tool for the rapid detection of Hg(2+) in different water samples without special instrumentation. All experimental variables that influence the test strip response were optimized and reported. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. A density functional theory study on the hydrogen bonding interactions between luteolin and ethanol.

    PubMed

    Zheng, Yan-Zhen; Xu, Jing; Liang, Qin; Chen, Da-Fu; Guo, Rui; Fu, Zhong-Min

    2017-08-01

    Ethanol is one of the most commonly used solvents to extract flavonoids from propolis. Hydrogen bonding interactions play an important role in the properties of liquid systems. The main objective of the work is to study the hydrogen bonding interactions between flavonoids and ethanol. Luteolin is a very common flavonoid that has been found in propolis of different geographical and botanical origins. In this work, it was selected as the representative flavonoid for detailed study. The study was performed from a theoretical perspective using the density functional theory (DFT) method. After careful optimization, there exist nine optimized geometries for the luteolin - CH3CH2OH complex. The binding distance of X - H···O, and the bond length, vibrational frequency, and electron density changes of X - H all indicate the formation of the hydrogen bond in the optimized geometries. In the optimized geometries, it is found that: (1) except for H2', H5', and H6', CH3CH2OH has formed hydrogen bonds with all the hydrogen and oxygen atoms in luteolin, and the hydrogen atoms in the hydroxyl groups of luteolin form the strongest hydrogen bonds with CH3CH2OH; (2) all of the hydrogen bonds are closed-shell interactions; (3) the strongest hydrogen bond is the O3' - H3'···O in structure A, while the weakest one is the C3 - H3···O in structure E; (4) the hydrogen bonds O3' - H3'···O, O - H···O4, O - H···O3' and O - H···O7 are of medium strength and covalent-dominant in nature, while the other hydrogen bonds are weak and predominantly electrostatic in nature.

  10. Computational Optimization and Characterization of Molecularly Imprinted Polymers

    NASA Astrophysics Data System (ADS)

    Terracina, Jacob J.

    Molecularly imprinted polymers (MIPs) are a class of materials containing sites capable of selectively binding to the imprinted target molecule. Computational chemistry techniques were used to study the effect of different fabrication parameters (the monomer-to-target ratios, pre-polymerization solvent, temperature, and pH) on the formation of the MIP binding sites. Imprinted binding sites were built in silico for the purposes of better characterizing the receptor - ligand interactions. Chiefly, the sites were characterized with respect to their selectivities and the heterogeneity between sites. First, a series of two-step molecular mechanics (MM) and quantum mechanics (QM) computational optimizations of monomer - target systems was used to determine optimal monomer-to-target ratios for the MIPs. Imidazole- and xanthine-derived target molecules were studied. The investigation included both small-scale models (one-target) and larger scale models (five-targets). The optimal ratios differed between the small and larger scales. For the larger models containing multiple targets, binding-site surface area analysis was used to evaluate the heterogeneity of the sites. The more fully surrounded sites had greater binding energies. Molecular docking was then used to measure the selectivities of the QM-optimized binding sites by comparing the binding energies of the imprinted target to that of a structural analogue. Selectivity was also shown to improve as binding sites become more fully encased by the monomers. For internal sites, docking consistently showed selectivity favoring the molecules that had been imprinted via QM geometry optimizations. The computationally imprinted sites were shown to exhibit size-, shape-, and polarity-based selectivity. This represented a novel approach to investigate the selectivity and heterogeneity of imprinted polymer binding sites, by applying the rapid orientation screening of MM docking to the highly accurate QM-optimized geometries. Next, we sought to computationally construct and investigate binding sites for their enantioselectivity. Again, a two-step MM → QM optimization scheme was used to "computationally imprint" chiral molecules. Using docking techniques, the imprinted binding sites were shown to exhibit an enantioselective preference for the imprinted molecule over its enantiomer. Docking of structurally similar chiral molecules showed that the sites computationally imprinted with R- or S-tBOC-tyrosine were able to differentiate between R- and S-forms of other tyrosine derivatives. The cross-enantioselectivity did not hold for chiral molecules that did not share the tyrosine H-bonding functional group orientations. Further analysis of the individual monomer - target interactions within the binding site led us to conclude that H-bonding functional groups that are located immediately next to the target's chiral center, and therefore spatially fixed relative to the chiral center, will have a stronger contribution to the enantioselectivity of the site than those groups separated from the chiral center by two or more rotatable bonds. These models were the first computationally imprinted binding sites to exhibit this enantioselective preference for the imprinted target molecules. Finally, molecular dynamics (MD) was used to quantify H-bonding interactions between target molecules, monomers, and solvents representative of the pre-polymerization matrix.
It was found that both target dimerization and solvent interference decrease the number of monomer - target H-bonds present. Systems were optimized via simulated annealing to create binding sites that were then subjected to molecular docking analysis. Docking showed that the presence of solvent had a detrimental effect on the sensitivity and selectivity of the sites, and that solvents with more H-bonding capabilities were more disruptive to the binding properties of the site. Dynamic simulations also showed that increasing the temperature of the solution can significantly decrease the number of H-bonds formed between the targets and monomers. It is believed that the monomer - target complexes formed within the pre-polymerization matrix are translated into the selective binding cavities formed during polymerization. Elucidating the nature of these interactions in silico improves our understanding of MIPs, ultimately allowing for more optimized sensing materials.

  11. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, John J.

    2017-01-01

    This paper presents two methods for constructing an approximate performance function of a desired parameter using correlated parameters. The methods are useful when real-time measurements of a desired performance function are not available to applications such as extremum-seeking control systems. The first method approximates an a priori measured or estimated desired performance function by combining real-time measurements of readily available correlated parameters. The parameters are combined using a weighting vector determined from a minimum-squares optimization to form a blended performance function. The blended performance function better matches the desired performance function minimum than single-measurement performance functions. The second method expands upon the first by replacing the a priori data with near-real-time measurements of the desired performance function. The resulting blended performance function weighting vector is updated when measurements of the desired performance function are available. Both methods are applied to data collected during formation-flight-for-drag-reduction flight experiments.
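
    A minimal sketch of the first method, with synthetic signals standing in for the flight measurements: a weighting vector is fit by least squares so that a combination of measurable, correlated parameters approximates an a priori desired performance function, and the blended function's minimum is compared with the desired one. All signals and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)
desired = (t - 0.6) ** 2                              # desired performance fn (minimum at 0.6)

# readily available, correlated surrogate signals (with noise)
signals = np.column_stack([
    (t - 0.55) ** 2 + 0.02 * rng.normal(size=t.size),
    t + 0.02 * rng.normal(size=t.size),
    np.ones_like(t),
])

w, *_ = np.linalg.lstsq(signals, desired, rcond=None) # minimum-squares weighting vector
blended = signals @ w                                 # blended performance function
print("weights:", np.round(w, 3))
print("argmin desired:", t[desired.argmin()], " argmin blended:", round(float(t[blended.argmin()]), 3))
```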

  12. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both the functions that had empirically proved best and new functions that may be worth trying.
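
    As background for the scaling question, the classical linear fitness scaling (one of the standard candidates such an analysis would consider) can be sketched as follows; the clipping rule and the multiple c_mult = 2 are conventional choices, not the optimal scalings derived in the paper.

```python
import numpy as np

def linear_scaling(fitness, c_mult=2.0):
    """Classical linear fitness scaling f' = a*f + b: keeps the mean fitness unchanged
    and maps the best individual to c_mult * mean, clipping a negative minimum if needed."""
    f = np.asarray(fitness, dtype=float)
    f_avg, f_max, f_min = f.mean(), f.max(), f.min()
    if f_max > f_avg:
        a = (c_mult - 1.0) * f_avg / (f_max - f_avg)
        b = f_avg * (1.0 - a)
        if a * f_min + b < 0.0:                     # avoid negative scaled fitness
            a = f_avg / (f_avg - f_min)
            b = -a * f_min
    else:                                           # population already uniform
        a, b = 1.0, 0.0
    return a * f + b

pop_fitness = np.array([1.0, 2.0, 3.0, 10.0])
print(np.round(linear_scaling(pop_fitness), 3))     # mean preserved, best -> 2 * mean
```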

  13. Chinese version of the Optimism and Pessimism Scale: Psychometric properties in mainland China and development of a short form.

    PubMed

    Xia, Jie; Wu, Daxing; Zhang, Jibiao; Xu, Yuanchao; Xu, Yunxuan

    2016-06-01

    This study aimed to validate the Chinese version of the Optimism and Pessimism Scale in a sample of 730 adult Chinese individuals. Confirmatory factor analyses confirmed the bidimensionality of the scale with two factors, optimism and pessimism. The total scale and optimism and pessimism factors demonstrated satisfactory reliability and validity. Population-based normative data and mean values for gender, age, and education were determined. Furthermore, we developed a 20-item short form of the Chinese version of the Optimism and Pessimism Scale with structural validity comparable to the full form. In summary, the Chinese version of the Optimism and Pessimism Scale is an appropriate and practical tool for epidemiological research in mainland China. © The Author(s) 2014.

  14. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite growing efficiency of the optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed optimization methods family. They possess many attractive features such as: ease of the implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe the GATool, evolutionary algorithm and the software package, used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on the problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in exquisite detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for the constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of the normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.

  15. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied, so distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM), since the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.

  16. Designs for thermal harvesting with nonlinear coordinate transformation

    NASA Astrophysics Data System (ADS)

    Ji, Qingxiang; Fang, Guodong; Liang, Jun

    2018-04-01

    In this paper a thermal concentrating design method was proposed based on the concept of a generating function, without knowing the needed coordinate transformation beforehand. The thermal harvesting performance was quantitatively characterized by heat concentrating efficiency and external temperature perturbation. Nonlinear transformations of different forms were employed to design high-order thermal concentrators, and the corresponding harvesting performances were investigated by numerical simulations. The numerical results show that the form of the coordinate transformation directly influences the distribution of heat flows inside the concentrator and, consequently, significantly influences the thermal harvesting behavior. The concentrating performance can be actively controlled and optimized by changing the form of the coordinate transformation. The analysis in this paper offers a beneficial method to flexibly tune the harvesting performance of the thermal concentrator according to the requirements of practical applications.

  17. Designer's unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.

    1992-01-01

    A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state of the art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a data base and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  18. [Current problems of school education and ways of its hygienic optimization].

    PubMed

    Stepanova, M I; Sazaniuk, Z I; Voronova, B Z; Polenova, M A

    2009-01-01

    The aim of the study was to analyse the effects of various innovative forms of school education on the health status and functional abilities of children and adolescents. Enhanced academic loads are shown to be the most unfavourable factor of the school environment. The main consequences of an excess teaching load are shortened motor and outdoor activities of the children and a smaller duration of night sleep. Optimization of the academic routine (alternation of studies and holidays) and a modular structure of the school calendar might help to reduce fatigue during school hours. Hygienic estimates of different variants of specialized education are obtained. Scientifically sound hygienic requirements are proposed to be applied to the organization of academic activities in a new type of educational institution: full-day schools.

  19. Discrete optimal control approach to a four-dimensional guidance problem near terminal areas

    NASA Technical Reports Server (NTRS)

    Nagarajan, N.

    1974-01-01

    Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.
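    For reference, a discrete-time cost functional of Bolza form combines a terminal penalty with a summed running cost over the horizon; the symbols below are generic (x_k the five-component state, u_k the three controls, f the discrete dynamics) and the specific aircraft model of this record is not reproduced.

```latex
J \;=\; \phi(x_N) \;+\; \sum_{k=0}^{N-1} L(x_k,\, u_k,\, k),
\qquad x_{k+1} = f(x_k,\, u_k,\, k), \quad k = 0, \dots, N-1 .
```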

  20. Mayer control problem with probabilistic uncertainty on initial positions

    NASA Astrophysics Data System (ADS)

    Marigonda, Antonio; Quincampoix, Marc

    2018-03-01

    In this paper we introduce and study an optimal control problem in Mayer form in the space of probability measures on Rn endowed with the Wasserstein distance. Our aim is to study optimality conditions when the knowledge of the initial state and velocity is subject to some uncertainty, modeled by a probability measure on Rd and by a vector-valued measure on Rd, respectively. We provide a characterization of the value function of such a problem as the unique solution of a Hamilton-Jacobi-Bellman equation in the space of measures in a suitable viscosity sense. An application to a pursuit-evasion game with uncertainty in the state space is also discussed, proving the existence of a value for the game.

  1. Particle-size distribution modified effective medium theory and validation by magneto-dielectric Co-Ti substituted BaM ferrite composites

    NASA Astrophysics Data System (ADS)

    Li, Qifan; Chen, Yajie; Harris, Vincent G.

    2018-05-01

    This letter reports an extended effective medium theory (EMT) including particle-size distribution functions to maximize the magnetic properties of magneto-dielectric composites. It is experimentally verified by Co-Ti substituted barium ferrite (BaCoxTixFe12-2xO19)/wax composites with specifically designed particle-size distributions. In the form of an integral equation, the extended EMT formula essentially takes the size-dependent parameters of magnetic particle fillers into account. It predicts the effective permeability of magneto-dielectric composites with various particle-size distributions, indicating an optimal distribution for a population of magnetic particles. The improvement of the optimized effective permeability is significant concerning magnetic particles whose properties are strongly size dependent.

  2. Application of Patterson-function direct methods to materials characterization.

    PubMed

    Rius, Jordi

    2014-09-01

    The aim of this article is a general description of the so-called Patterson-function direct methods (PFDM), from their origin to their present state. It covers a 20-year period of methodological contributions to crystal structure solution, most of them published in Acta Crystallographica Section A. The common feature of these variants of direct methods is the introduction of the experimental intensities in the form of the Fourier coefficients of origin-free Patterson-type functions, which allows the active use of both strong and weak reflections. The different optimization algorithms are discussed and their performances compared. This review focuses not only on those PFDM applications related to powder diffraction data but also on some recent results obtained with electron diffraction tomography data.

  3. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
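    A compact sketch of an optimized-gradient-method style iteration is given below for orientation; the θ-recursion and momentum coefficients follow the commonly cited form of Kim and Fessler's method, but they are reproduced from memory and should be checked against the paper before use. The quadratic test problem is an illustrative assumption.

```python
import numpy as np

def ogm(grad, L, x0, n_iter):
    """Sketch of an optimized-gradient-method style iteration (the exact
    coefficients should be verified against the published algorithm)."""
    theta = 1.0
    x = y = np.asarray(x0, dtype=float)
    for i in range(n_iter):
        y_next = x - grad(x) / L                      # gradient step
        if i < n_iter - 1:
            theta_next = (1 + np.sqrt(1 + 4 * theta**2)) / 2
        else:                                         # modified final step
            theta_next = (1 + np.sqrt(1 + 8 * theta**2)) / 2
        # Momentum uses both the y-difference and a gradient-step correction
        x = y_next + (theta - 1) / theta_next * (y_next - y) \
                   + theta / theta_next * (y_next - x)
        y, theta = y_next, theta_next
    return x

# Quadratic test: f(x) = 0.5 x^T A x with Lipschitz constant L = max eigenvalue
A = np.diag([1.0, 10.0, 100.0])
print(ogm(lambda x: A @ x, L=100.0, x0=np.ones(3), n_iter=50))
```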

  4. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  5. An analysis of value function learning with piecewise linear control

    NASA Astrophysics Data System (ADS)

    Tutsoy, Onder; Brown, Martin

    2016-05-01

    Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
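    The following fragment is a generic TD(0) value-function learner with a polynomial basis and a saturated linear controller, included only to make the ingredients of the abstract concrete; the plant, stage cost, and basis are assumptions, not the exemplar test problem analysed in the paper.

```python
import numpy as np

def td0_linear(n_episodes=200, steps=50, gamma=0.9, lr=0.05, seed=0):
    """TD(0) with a polynomial basis on a scalar linear plant under a
    saturated linear controller; plant and cost choices are illustrative."""
    rng = np.random.default_rng(seed)
    a, b = 0.9, 1.0                       # plant: x_{k+1} = a x_k + b u_k
    k_gain, u_max = 0.5, 0.2              # saturated linear control
    phi = lambda x: np.array([1.0, x, x**2, x**3])   # basis functions
    w = np.zeros(4)                       # value-function parameters
    for _ in range(n_episodes):
        x = rng.uniform(-1, 1)
        for _ in range(steps):
            u = np.clip(-k_gain * x, -u_max, u_max)
            x_next = a * x + b * u
            cost = x**2 + u**2            # stage cost
            # Temporal-difference error and semi-gradient parameter update
            delta = cost + gamma * w @ phi(x_next) - w @ phi(x)
            w += lr * delta * phi(x)
            x = x_next
    return w

print(td0_linear())
```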

  6. Extending a multi-scale parameter regionalization (MPR) method by introducing parameter constrained optimization and flexible transfer functions

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2015-04-01

    A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model that is specialized for alpine environments, but has been applied over a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended in order to allow for a more meta-heuristical handling of the transfer functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter estimation scheme: the constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer functions: spline-based functions enable arbitrary forms of transfer functions. This is of importance since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer function itself. The contribution presents the results and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327; Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den

    Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testingmore » and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.« less

  8. Content-aware photo collage using circle packing.

    PubMed

    Yu, Zongqiao; Lu, Lin; Guo, Yanwen; Fan, Rongfei; Liu, Mingming; Wang, Wenping

    2014-02-01

    In this paper, we present a novel approach for automatically creating a photo collage that assembles the interest regions of a given group of images naturally. Previous methods on photo collage are generally built upon a well-defined optimization framework, which computes all the geometric parameters and layer indices for input photos on the given canvas by optimizing a unified objective function. The complex nonlinear form of the optimization function limits their scalability and efficiency. From the geometric point of view, we recast the generation of a collage as a region partition problem such that each image is displayed in its corresponding region partitioned from the canvas. The core of this is an efficient power-diagram-based circle packing algorithm that arranges a series of circles assigned to input photos compactly in the given canvas. To favor important photos, the circles are associated with image importances determined by an image ranking process. A heuristic search process is developed to ensure that the salient information of each photo is displayed in the polygonal area resulting from circle packing. With our new formulation, each factor influencing the state of a photo is optimized in an independent stage, and computation of the optimal states for neighboring photos is completely decoupled. This improves the scalability of collage results and ensures their diversity. We also devise a saliency-based image fusion scheme to generate seamless compositive collages. Our approach can generate collages on nonrectangular canvases and supports interactive collage that allows the user to refine collage results according to his/her personal preferences. We conduct extensive experiments and show the superiority of our algorithm by comparing against previous methods.

  9. Minimum noise impact aircraft trajectories

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Melton, R. G.

    1981-01-01

    Numerical optimization is used to compute the optimum flight paths, based upon a parametric form that implicitly includes some of the problem restrictions. The other constraints are formulated as penalties in the cost function. Various aircraft on multiple trajectories (landing and takeoff) can be considered. The modular design employed allows for the substitution of alternate models of the population distribution, aircraft noise, flight paths, and annoyance, or for the addition of other features (e.g., fuel consumption) in the cost function. A reduction in the required amount of searching over local minima was achieved through use of the statistical lateral dispersion present in the flight paths.

  10. Activities of the NASA sponsored SRI technology applications team in transferring aerospace technology to the public sector

    NASA Technical Reports Server (NTRS)

    Berke, J. G.

    1971-01-01

    The organization and functions of an interdisciplinary team for the application of aerospace generated technology to the solution of discrete technological problems within the public sector are presented. The interdisciplinary group formed at Stanford Research Institute, California is discussed. The functions of the group are to develop and conduct a program not only optimizing the match between public sector technological problems in criminalistics, transportation, and the postal services and potential solutions found in the aerospace data base, but also ensuring that appropriate solutions are actually utilized. The work accomplished during the period from July 1, 1970 to June 30, 1971 is reported.

  11. Self-organization of grafted polyelectrolyte layers via the coupling of chemical equilibrium and physical interactions.

    PubMed

    Tagliazucchi, Mario; de la Cruz, Mónica Olvera; Szleifer, Igal

    2010-03-23

    The competition between chemical equilibrium, for example protonation, and physical interactions determines the molecular organization and functionality of biological and synthetic systems. Charge regulation by displacement of acid-base equilibrium induced by changes in the local environment provides a feedback mechanism that controls the balance between electrostatic, van der Waals, and steric interactions and molecular organization. Which strategies do responsive systems follow to globally optimize chemical equilibrium and physical interactions? We address this question by theoretically studying model layers of end-grafted polyacids. These layers spontaneously form self-assembled aggregates, presenting domains of controlled local pH and whose morphologies can be manipulated by the composition of the solution in contact with the film. Charge regulation stabilizes micellar domains over a wide range of pH by reducing the local charge in the aggregate at the cost of chemical free energy and gaining in hydrophobic interactions. This balance determines the boundaries between different aggregate morphologies. We show that a qualitatively new form of organization arises from the coupling between physical interactions and protonation equilibrium. This optimization strategy presents itself with polyelectrolytes coexisting in two different and well-defined protonation states. Our results underline the need to consider the coupling between chemical equilibrium and physical interactions due to their highly nonadditive behavior. The predictions provide guidelines for the creation of responsive polymer layers presenting self-organized patterns with functional properties, and they give insights for the understanding of competing interactions in highly inhomogeneous and constrained environments such as those relevant in nanotechnology and those responsible for biological cell function.

  12. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

    For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crop and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, constraints and objectives, in any time step is conditional: it changes based on the value of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short term objective function is a surrogate for achieving long term multi-objective results. The long term performance for any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is being initiated on a wrapper that will allow us to employ a genetic algorithm to improve the form of the rule (conditions, constraints, and short term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions and accounting for uncertainty, at least implicitly. The author asserts that real world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real world examples.
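    To make the time-step MILP idea concrete, the toy allocation problem below maximizes a short-term economic objective subject to a mass balance and a conditional (binary-switched) delivery rule. It is a sketch in the spirit of the description above, not the OASIS formulation, and it assumes the open-source PuLP modelling library; all coefficients are fabricated.

```python
# Toy single-time-step allocation MILP (illustrative only). Assumes PuLP.
import pulp

inflow = 100.0                      # available water this time step (arbitrary units)
value_crop, value_mi = 2.0, 5.0     # assumed marginal values of delivered water
demand_crop, demand_mi = 80.0, 40.0

prob = pulp.LpProblem("water_allocation", pulp.LpMaximize)
crop = pulp.LpVariable("crop_delivery", lowBound=0, upBound=demand_crop)
mi = pulp.LpVariable("mi_delivery", lowBound=0, upBound=demand_mi)
spill = pulp.LpVariable("spill", lowBound=0)
# Binary switch: crop deliveries are allowed only once M&I demand is fully met
trade_on = pulp.LpVariable("trade_on", cat="Binary")

prob += value_crop * crop + value_mi * mi          # short-term economic objective
prob += crop + mi + spill == inflow                # mass balance
prob += mi >= demand_mi * trade_on                 # conditional (state-dependent) rule
prob += crop <= demand_crop * trade_on             # crop only after M&I is satisfied

prob.solve()
print(pulp.value(crop), pulp.value(mi), pulp.value(spill))
```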

  13. Dual-Level Method for Estimating Multistructural Partition Functions with Torsional Anharmonicity.

    PubMed

    Bao, Junwei Lucas; Xing, Lili; Truhlar, Donald G

    2017-06-13

    For molecules with multiple torsions, an accurate evaluation of the molecular partition function requires consideration of multiple structures and their torsional-potential anharmonicity. We previously developed a method called MS-T for this problem, and it requires an exhaustive conformational search with frequency calculations for all the distinguishable conformers; this can become expensive for molecules with a large number of torsions (and hence a large number of structures) if it is carried out with high-level methods. In the present work, we propose a cost-effective method to approximate the MS-T partition function when there are a large number of structures, and we test it on a transition state that has eight torsions. This new method is a dual-level method that combines an exhaustive conformer search carried out by a low-level electronic structure method (for instance, AM1, which is very inexpensive) and selected calculations with a higher-level electronic structure method (for example, density functional theory with a functional that is suitable for conformational analysis and thermochemistry). To provide a severe test of the new method, we consider a transition state structure that has 8 torsional degrees of freedom; this transition state structure is formed along one of the reaction pathways of the hydrogen abstraction reaction (at carbon-1) of ketohydroperoxide (KHP; its IUPAC name is 4-hydroperoxy-2-pentanone) by OH radical. We find that our proposed dual-level method is able to significantly reduce the computational cost for computing MS-T partition functions for this test case with a large number of torsions and with a large number of conformers because we carry out high-level calculations for only a fraction of the distinguishable conformers found by the low-level method. In the example studied here, the dual-level method with 40 high-level optimizations (1.8% of the number of optimizations in a coarse-grained full search and 0.13% of the number of optimizations in a fine-grained full search) reproduces the full calculation of the high-level partition function within a factor of 1.0 to 2.0 from 200 to 1000 K. The error in the dual-level method can be further reduced to factors of 0.6 to 1.1 over the whole temperature interval from 200 to 2400 K by optimizing 128 structures (5.9% of the number of optimizations in a coarse-grained full search and 0.41% of the number of optimizations in a fine-grained full search). These factor-of-two or better errors are small compared to errors up to a factor of 1.0 × 10³ if one neglects multistructural effects for the case under study.
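    As a stripped-down illustration of the dual-level idea (rank conformers cheaply, refine only a subset at the higher level), the sketch below evaluates a simple Boltzmann sum over conformer energies. The full MS-T partition function also multiplies in per-conformer rovibrational and torsional-anharmonicity factors, which are omitted here; all energies are fabricated.

```python
import numpy as np

K_B = 1.380649e-23 / 4.3597447e-18   # Boltzmann constant in hartree/K

def q_conf(rel_energies_hartree, T):
    """Boltzmann sum over distinguishable conformer energies (relative,
    in hartree) -- a simplified stand-in for the multistructural sum."""
    e = np.asarray(rel_energies_hartree)
    return np.exp(-e / (K_B * T)).sum()

# Dual-level flavor: rank conformers with cheap (low-level) energies, then
# refine only the lowest-lying few with expensive (high-level) energies.
low = np.array([0.0, 0.002, 0.004, 0.010, 0.012])      # fabricated values
keep = np.argsort(low)[:3]
high = low[keep] * 1.1                                   # pretend refinement
print(q_conf(high, 298.15))
```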

  14. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
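    A minimal example of the SSE-minimization step described above is sketched below using SciPy's gradient-based least-squares solver; the two-parameter decay model and the synthetic data are assumptions chosen only to show the mechanics.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "experimental" data from an assumed two-parameter decay model
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
true_amplitude, true_rate = 2.0, 0.5
data = true_amplitude * np.exp(-true_rate * t) + 0.05 * rng.normal(size=t.size)

def residuals(p):
    amplitude, rate = p
    return amplitude * np.exp(-rate * t) - data   # SSE = sum(residuals**2)

# Gradient-based local search; multiple restarts or added stochasticity
# help avoid being trapped in local minima, as noted in the abstract.
fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)       # estimated (amplitude, rate)
```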

  15. A new real-time guidance strategy for aerodynamic ascent flight

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takayuki; Kawaguchi, Jun'ichiro

    2007-12-01

    Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift, taking off horizontally, the optimal steering for these vehicles exhibits completely different behavior from that in conventional rocket flight. In this paper, a new guidance strategy is proposed. This method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form comprising linear and logarithmic terms, which include only four parameters. The parameter optimization of this method shows that the acquired terminal horizontal velocity is almost the same as that obtained by direct numerical optimization. This supports the parameterized linear-logarithmic steering law. It is also shown that there exists a simple linear relation between the terminal states and the parameters to be corrected. This relation allows the parameters to be determined in real time so as to satisfy the terminal boundary conditions. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy built and presented here guarantees a robust solution in real time without any optimization process, and it is found to be quite practical.

  16. Optimization of Actuating Origami Networks

    NASA Astrophysics Data System (ADS)

    Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Joo, James; Gregory, Reich; Vaia, Richard

    2015-03-01

    Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. By leveraging design concepts from action origami, a subset of origami art focused on kinematic mechanisms, reversible folding patterns for applications such as solar array packaging, tunable antennae, and deployable sensing platforms may be designed. However, the enormity of the design space and the need to identify the requisite actuation forces within the structure places a severe limitation on design strategies based on intuition and geometry alone. The present work proposes a topology optimization method, using truss and frame element analysis, to distribute foldline mechanical properties within a reference crease pattern. Known actuating patterns are placed within a reference grid and the optimizer adjusts the fold stiffness of the network to optimally connect them. Design objectives may include a target motion, stress level, or mechanical energy distribution. Results include the validation of known action origami structures and their optimal connectivity within a larger network. This design suite offers an important step toward systematic incorporation of origami design concepts into new, novel and reconfigurable engineering devices. This research is supported under the Air Force Office of Scientific Research (AFOSR) funding, LRIR 13RQ02COR.

  17. Shape design of internal cooling passages within a turbine blade

    NASA Astrophysics Data System (ADS)

    Nowak, Grzegorz; Nowak, Iwona

    2012-04-01

    The article concerns the optimization of the shape and location of non-circular passages cooling the blade of a gas turbine. To model the shape, four Bezier curves which form a closed profile of the passage were used. In order to match the shape of the passage to the blade profile, a technique was put forward to copy and scale the profile fragments into the component, and build the outline of the passage on the basis of them. For so-defined cooling passages, optimization calculations were carried out with a view to finding their optimal shape and location in terms of the assumed objectives. The task was solved as a multi-objective problem with the use of the Pareto method, for a cooling system composed of four and five passages. The tool employed for the optimization was the evolutionary algorithm. The article presents the impact of the population on the task convergence, and discusses the impact of different optimization objectives on the Pareto optimal solutions obtained. Due to the problem of different impacts of individual objectives on the position of the solution front which was noticed during the calculations, a two-step optimization procedure was introduced. Also, comparative optimization calculations for the scalar objective function were carried out and set up against the non-dominated solutions obtained in the Pareto approach. The optimization process resulted in a configuration of the cooling system that allows a significant reduction in the temperature of the blade and its thermal stress.

  18. Development of Ultraviolet (UV) Radiation Protective Fabric Using Combined Electrospinning and Electrospraying Technique

    NASA Astrophysics Data System (ADS)

    Sinha, Mukesh Kumar; Das, B. R.; Kumar, Kamal; Kishore, Brij; Prasad, N. Eswara

    2017-06-01

    The article reports a novel technique for functionalization of a nanoweb to develop ultraviolet (UV) radiation protective fabric. The UV radiation protection effect is produced by a combination of electrospinning and electrospraying techniques. A nanofibrous web of polyvinylidene difluoride (PVDF) coated on polypropylene nonwoven fabric is produced by the latest nanospider technology. Subsequently, the web is functionalized by titanium dioxide (TiO2). The developed web is characterized for evaluation of surface morphology and other functional properties: mechanical, chemical, crystalline and thermal. An optimal (judicious) nanofibre spinning condition is achieved and established. The produced web is uniformly coated by defect-free functional nanofibres in a continuous form of a useable textile structural membrane for ultraviolet (UV) protective clothing. This research initiative succeeds in the preparation and optimization of various nanowebs for UV protection. Field Emission Scanning Electron Microscope (FESEM) results reveal that the PVDF web's photo-degradative behavior is non-accelerated compared to normal polymeric grade fibres. Functionalization with TiO2 has enhanced the photo-stability of the webs. The ultraviolet protection factors of the functionalized and non-functionalized nanowebs were empirically evaluated to be 65 and 24, respectively. The developed coated layer could be exploited for developing various defence, para-military and civilian UV protective lightweight clothing (tents, covers and shelter segments, combat suits, snow-bound camouflaging nets). This research, therefore, is conducted in an attempt to develop a scientific understanding of PVDF fibre coated webs for photo-degradation and applications in defence protective textiles. This technological research at laboratory scale could be translated into bulk production.

  19. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with a focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaics, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis we address some other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type DG projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account to assure that private DG investment remains economical.
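    One way to read the non-parametric chance-constraint idea is as a sample-based feasibility check: given Monte Carlo samples of a nodal voltage under a candidate storage schedule, require the empirical in-limits probability to exceed 1 − ε. The sketch below illustrates only that check; the voltage samples are fabricated and the thesis's actual models are not reproduced.

```python
import numpy as np

def satisfies_chance_constraint(voltage_samples, v_min=0.95, v_max=1.05, eps=0.05):
    """Empirical (distribution-free) check that the per-unit voltage stays
    within [v_min, v_max] with probability at least 1 - eps."""
    inside = (voltage_samples >= v_min) & (voltage_samples <= v_max)
    return inside.mean() >= 1.0 - eps

rng = np.random.default_rng(0)
samples = 1.0 + 0.02 * rng.normal(size=5000)    # fabricated voltage samples
print(satisfies_chance_constraint(samples))
```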

  20. Optimized Enhanced Bioremediation Through 4D Geophysical Monitoring and Autonomous Data Collection, Processing and Analysis

    DTIC Science & Technology

    2014-09-01

    High Fructose Corn Syrup: diluted 1 to 10 percent by weight, 50 to 500 mg/l; Slow Release Whey (fresh/powdered): dissolved (powdered form) or injected ... the assessment of remedial progress and functioning. This project also addressed several high priority needs from the Navy Environmental Quality ... memory high-performance computing systems. For instance, as of March 2012 the code has been successfully executed on 2 CPUs for an inversion problem

  1. An integrated theoretical and experimental investigation of insensitive munition compounds adsorption on cellulose, cellulose triacetate, chitin and chitosan surfaces.

    PubMed

    Gurtowski, Luke A; Griggs, Chris S; Gude, Veera G; Shukla, Manoj K

    2018-02-01

    This manuscript reports results of a combined computational chemistry and batch adsorption investigation of the insensitive munition compounds 2,4-dinitroanisole (DNAN), triaminotrinitrobenzene (TATB), 1,1-diamino-2,2-dinitroethene (FOX-7) and nitroguanidine (NQ), and the traditional munition compound 2,4,6-trinitrotoluene (TNT) on the surfaces of cellulose, cellulose triacetate, chitin and chitosan biopolymers. Cellulose, cellulose triacetate, chitin and chitosan were modeled as the trimeric form of the linear chain of the ⁴C₁ chair conformation of β-D-glucopyranose, its triacetate form, β-N-acetylglucosamine and D-glucosamine, respectively, in the 1→4 linkage. Geometries were optimized at the M062X functional level of density functional theory (DFT) using the 6-31G(d,p) basis set in the gas phase and in bulk water solution using the conductor-like polarizable continuum model (CPCM) approach. The nature of the potential energy surfaces of the optimized geometries was ascertained through harmonic vibrational frequency analysis. The basis set superposition error (BSSE) corrected interaction energies were obtained using the 6-311G(d,p) basis set at the same theoretical level. The computed BSSE in the gas phase was used to correct the interaction energy in bulk water solution. Computed and experimental results regarding the ability of the considered surfaces to adsorb the insensitive munition compounds are discussed. Copyright © 2017. Published by Elsevier B.V.

  2. Multivariable optimization of liquid rocket engines using particle swarm algorithms

    NASA Astrophysics Data System (ADS)

    Jones, Daniel Ray

    Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
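    For orientation, a minimal global-best particle swarm optimizer is sketched below; the thesis applies a PSO variant to a finite-area combustion chamber performance model, which is not reproduced here, so the smooth two-variable surrogate objective and all parameter values are assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, len(lo)))
        # Velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Fabricated stand-in objective over (mixture ratio, expansion ratio);
# minimizing it plays the role of maximizing a performance surrogate.
obj = lambda p: (p[0] - 2.3) ** 2 + 0.1 * (p[1] - 8.0) ** 2
best, val = pso(obj, np.array([[1.0, 4.0], [2.0, 50.0]]))
print(best, val)
```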

  3. Airway disease: anatomopathologic patterns and functional correlations.

    PubMed

    Mormile, F; Ciappi, G

    1997-01-01

    Airways represent a serial and parallel branched system through which the alveoli are connected with the external air. They participate in the mechanical and immune defense against noxious agents and in regional flow regulation to optimize the perfusion/ventilation ratio, and they provide mechanical support for the lung. Functional exploration of the central airways is based on resistance measurement, the flow-volume curve or spirometry, while the peripheral airways influence parameters such as the upstream resistance, the slope of phase III of the nitrogen washout and the residual volume. Bronchodynamic tests supply important information on airway reversibility and nonspecific reactivity. Anatomopathologic alterations of chronic obstructive bronchitis, pulmonary emphysema and bronchial asthma account for their specific functional and bronchodynamic alterations. There is growing interest in bronchiolitis in the clinical, radiologic and functional fields. This type of lesion, always present in COPD, asthma and interstitial disease, becomes relevant when isolated or predominant. The most useful anatomofunctional classification separates the "constrictive" forms, a cause of obstruction and hyperinflation, from "proliferative" forms, where an intraluminal proliferation more or less extended to the alveolar air spaces, as in BOOP (bronchiolitis obliterans organizing pneumonia), results in restrictive dysfunction. Constrictive bronchiolitis obliterans represents a severe and frequent complication of lung and bone marrow transplantation. Idiopathic BOOP may occur with cough or flu-like symptoms. In other cases, constrictive and proliferative forms may have a toxic (gases or drugs), postinfective or immune etiology (rheumatoid arthritis, LES, etc.). Respiratory bronchiolitis or smokers' bronchiolitis, an often asymptomatic lesion rarely associated with an interstitial lung disease, should be considered separately. The relationships between respiratory bronchiolitis, COPD and initial centriacinar emphysema are still to be elucidated. The diagnostic combination of the more sensitive functional tests with HRCT will allow a better understanding of the natural history of the various forms of bronchiolitis.

  4. The multifacet graphically contracted function method. I. Formulation and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

    2014-08-14

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N₂ dissociation, cubic H₈ dissociation, the symmetric dissociation of H₂O, and the insertion of Be into H₂. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  5. Optimization of preparation of activated carbon from cotton stalk by microwave assisted phosphoric acid-chemical activation.

    PubMed

    Deng, Hui; Zhang, Genlin; Xu, Xiaolin; Tao, Guanghui; Dai, Jiulei

    2010-10-15

    The preparation of activated carbon (AC) from cotton stalk was investigated in this paper. An orthogonal array experimental design method was used to optimize the preparation of AC using microwave-assisted phosphoric acid activation. The optimized parameters were a radiation power of 400 W, a radiation time of 8 min, a phosphoric acid concentration of 50% by volume and an impregnation time of 20 h, respectively. The surface characteristics of the AC prepared under the optimized conditions were examined by pore structure analysis, scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). Pore structure analysis shows that mesopores constitute more of the porosity of the prepared AC. Compared to cotton stalk, different functionalities and morphology were formed on the carbon surfaces during the preparation process. The adsorption capacity of the AC was also investigated by removing methylene blue (MB) from aqueous solution. The equilibrium data of the adsorption were well fitted to the Langmuir isotherm. The maximum adsorption capacity of MB on the prepared AC is 245.70 mg/g. The adsorption process follows the pseudo-second-order kinetic model. 2010 Elsevier B.V. All rights reserved.

  6. Low-leakage and low-instability labyrinth seal

    NASA Technical Reports Server (NTRS)

    Rhode, David L. (Inventor)

    1997-01-01

    Improved labyrinth seal designs are disclosed. The present invention relates to labyrinth seal systems with selected sealing surfaces and seal geometry to optimize flow deflection and produce maximum turbulent action. Optimum seal performance is generally accomplished by providing sealing surfaces and fluid cavities formed to dissipate fluid energy as a function of the geometry of the sealing surfaces along with the position and size of the fluid cavities formed between members of the labyrinth seal system. Improved convex surfaces, annular flow reversal grooves, flow deflection blocks and rough, machined surfaces cooperate to enhance the performance of the labyrinth seal systems. For some labyrinth seal systems a mid-cavity throttle and either rigid teeth or flexible spring teeth may be included.

  7. Fuzzy rationality and parameter elicitation in decision analysis

    NASA Astrophysics Data System (ADS)

    Nikolova, Natalia D.; Tenekedjiev, Kiril I.

    2010-07-01

    It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview of techniques to find an optimal alternative among alternatives with imprecise and interval probabilities is presented. Scalarisation methods are outlined as the most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed representation of the elicitation process influenced by fuzzy rationality is given. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with p-boxes. As demonstrated, the approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities does depend on preferences.

  8. Crystal structure of methylprednisolone acetate form II, C₂₄H₃₂O₆

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheatley, Austin M.; Kaduk, James A.; Gindhart, Amy M.

    The crystal structure of methylprednisolone acetate form II, C₂₄H₃₂O₆, has been solved and refined using synchrotron X-ray powder diffraction data, and optimized using density functional techniques. Methylprednisolone acetate crystallizes in space group P2₁2₁2₁ (#19) with a = 8.17608(2) Å, b = 9.67944(3) Å, c = 26.35176(6) Å, V = 2085.474(6) Å³, and Z = 4. Both hydroxyl groups act as hydrogen bond donors, resulting in a two-dimensional hydrogen bond network in the ab plane. C–H…O hydrogen bonds also contribute to the crystal energy. The powder pattern is included in the Powder Diffraction File™ as entry 00-065-1412.

  9. A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shape of polygons and, at the same time, is parameter independent on its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.

  10. A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-10-04

    We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shape of polygons and, at the same time, is parameter independent on its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.

  11. Protofit: A program for determining surface protonation constants from titration data

    NASA Astrophysics Data System (ADS)

    Turner, Benjamin F.; Fein, Jeremy B.

    2006-11-01

    Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
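    The reduction from a raw titration to a buffering-intensity curve can be sketched as follows: subtract the protons remaining in solution from the protons added, normalize by adsorbent mass, and differentiate with respect to pH. This is a generic sketch of that bookkeeping, not ProtoFit's algorithm; the sign convention assumes an acid titrant and the example data are fabricated.

```python
import numpy as np

def buffering_intensity(v_added, pH, c_titrant, v0, mass, Kw=1e-14):
    """Reduce a raw acid titration (cumulative titrant volume in L, measured pH)
    to protons exchanged with the surface per gram and its slope versus pH."""
    h = 10.0 ** (-pH)                      # proton activity ~ concentration
    oh = Kw / h
    v_tot = v0 + v_added
    # Protons added minus the change of protons in solution, per gram of solid
    q_ads = (c_titrant * v_added - ((h - oh) * v_tot - (h[0] - oh[0]) * v0)) / mass
    # Buffering intensity: slope of the reduced curve with respect to pH
    q_star = np.gradient(q_ads, pH)
    return q_ads, q_star

# Example with synthetic data (acid titration of 0.1 g of solid in 0.05 L)
v = np.linspace(0, 2e-3, 40)
pH_meas = 7.0 - 1500.0 * v          # fabricated, monotone pH trace
q, q_star = buffering_intensity(v, pH_meas, c_titrant=0.1, v0=0.05, mass=0.1)
print(q_star[:5])
```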

  12. The Function of Gas Vesicles in Halophilic Archaea and Bacteria: Theories and Experimental Evidence

    PubMed Central

    Oren, Aharon

    2012-01-01

    A few extremely halophilic Archaea (Halobacterium salinarum, Haloquadratum walsbyi, Haloferax mediterranei, Halorubrum vacuolatum, Halogeometricum borinquense, Haloplanus spp.) possess gas vesicles that bestow buoyancy on the cells. Gas vesicles are also produced by the anaerobic endospore-forming halophilic Bacteria Sporohalobacter lortetii and Orenia sivashensis. We have extensive information on the properties of gas vesicles in Hbt. salinarum and Hfx. mediterranei and the regulation of their formation. Different functions were suggested for gas vesicle synthesis: buoying cells towards oxygen-rich surface layers in hypersaline water bodies to prevent oxygen limitation, reaching higher light intensities for the light-driven proton pump bacteriorhodopsin, positioning the cells optimally for light absorption, light shielding, reducing the cytoplasmic volume leading to a higher surface-area-to-volume ratio (for the Archaea) and dispersal of endospores (for the anaerobic spore-forming Bacteria). Except for Hqr. walsbyi which abounds in saltern crystallizer brines, gas-vacuolate halophiles are not among the dominant life forms in hypersaline environments. There only has been little research on gas vesicles in natural communities of halophilic microorganisms, and the few existing studies failed to provide clear evidence for their possible function. This paper summarizes the current status of the different theories why gas vesicles may provide a selective advantage to some halophilic microorganisms. PMID:25371329

  13. Redox-linked Conformational Dynamics in Apoptosis Inducing Factor

    PubMed Central

    Sevrioukova, Irina F.

    2009-01-01

    Apoptosis inducing factor (AIF) is a bifunctional mitochondrial flavoprotein critical for energy metabolism and induction of caspase-independent apoptosis, whose exact role in normal mitochondria remains unknown. Upon reduction with NADH, AIF undergoes dimerization and forms tight, long-lived FADH2-NAD charge-transfer complexes (CTC) proposed to be functionally important. To get a deeper insight into structure/function relations and redox mechanism of this vitally important protein, we determined the x-ray structures of oxidized and NADH-reduced forms of naturally folded recombinant murine AIF. Our structures reveal that CTC with the pyridine nucleotide is stabilized by (i) π-stacking interactions between coplanar nicotinamide, isoalloxazine and Phe309 rings, (ii) rearrangement of multiple aromatic residues in the C-terminal domain, likely serving as an electron delocalization site, and (iii) an extensive hydrogen-bonding network involving His453, a key residue undergoing a conformational switch to directly interact and orient the nicotinamide in position optimal for charge transfer. Via the His453-containing peptide, redox changes in the active site are transmitted to the surface, promoting AIF dimerization and restricting access to a primary nuclear localization signal through which the apoptogenic form is transported to the nucleus. Structural findings agree with the biochemical data and support the hypothesis that both normal and apoptogenic functions of AIF are controlled by NADH. PMID:19447115

  14. Kinetic energy partition method applied to ground state helium-like atoms.

    PubMed

    Chen, Yu-Hsin; Chao, Sheng D

    2017-03-28

    We have used the recently developed kinetic energy partition (KEP) method to solve the quantum eigenvalue problems for helium-like atoms and obtain precise ground state energies and wave-functions. The key to treating properly the electron-electron (repulsive) Coulomb potential energies for the KEP method to be applied is to introduce a "negative mass" term into the partitioned kinetic energy. A Hartree-like product wave-function from the subsystem wave-functions is used to form the initial trial function, and the variational search for the optimized adiabatic parameters leads to a precise ground state energy. This new approach sheds new light on the all-important problem of solving many-electron Schrödinger equations and hopefully opens a new way to predictive quantum chemistry. The results presented here give very promising evidence that an effective one-electron model can be used to represent a many-electron system, in the spirit of density functional theory.

  15. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. We consider the problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form c·x^K(1 − x)^m and show that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
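
    As a hedged illustration of why a density of this form leads to linear estimates, the sketch below uses the conjugate Beta prior implied by c·x^K(1 − x)^m with a simple Bernoulli-type observation model; the paper's full discrete time jump process setting is more general, so this only conveys the flavor of the result.

```python
# Illustrative sketch only: a rate with prior density proportional to
# x**K * (1 - x)**m is a Beta(K+1, m+1) density, and conjugate updating with
# Bernoulli-type observations gives a posterior mean (the MMSE estimate) that
# is linear in the number of observed jumps.

def mmse_rate_estimate(K, m, n_jumps, n_steps):
    """Posterior mean of the rate after observing n_jumps in n_steps."""
    return (K + 1 + n_jumps) / (K + m + 2 + n_steps)

print(mmse_rate_estimate(K=2, m=3, n_jumps=4, n_steps=10))  # linear in n_jumps
```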

  16. Functionalized core-shell hydrogel microsprings by anisotropic gelation with bevel-tip capillary

    PubMed Central

    Yoshida, Koki; Onoe, Hiroaki

    2017-01-01

    This study describes a novel microfluidic-based method for the synthesis of hydrogel microsprings that are capable of encapsulating various functional materials. A continuous flow of alginate pre-gel solution can spontaneously form a hydrogel microspring by anisotropic gelation around the bevel-tip of the capillary. This technique allows fabrication of hydrogel microsprings using only simple capillaries and syringe pumps, while their complex compartmentalization characterized by a laminar flow inside the capillary can contribute to the optimization of the microspring internal structure and functionality. Encapsulation of several functional materials including magnetic-responsive nanoparticles or cell dispersed collagen for tissue scaffold was demonstrated to functionalize the microsprings. Our core-shell hydrogel microsprings have immense potential for application in a number of fields, including biological/chemical microsensors, biocompatible soft robots/microactuators, drug release, self-assembly of 3D structures and tissue engineering. PMID:28378803

  17. Estimation procedure of the efficiency of the heat network segment

    NASA Astrophysics Data System (ADS)

    Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.

    2017-07-01

    An extensive city heat network contains many segments, and each segment transfers heat energy with a different efficiency. This work proposes an original technical approach: the energy efficiency function of a heat network segment is evaluated and expressed through two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with the ambient temperature. Criterial dependences for evaluating the specified segment efficiency of the heat network and for finding the parameters for optimal control of heat supply to remote users were derived using methods of functional analysis. In general, the efficiency function of the heat network segment is represented by a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can also be solved: the required flow rate of the heating agent and its temperature can be found from the specified segment efficiency and the ambient temperature, and requirements on heat insulation and pipe diameters can be formulated as well. The calculation results were obtained in a strict analytical form, which allows the derived functional dependences to be examined for the presence of extrema (maxima) under the given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, where only the heating agent flow rate and pipe temperatures can be changed, and for a network under design (construction), where the material parameters of the network can still be modified. The procedure allows the diameter and length of the pipes, the type of insulation, etc., to be refined. The pipe length may be treated as the independent parameter in the calculations; optimization of this parameter is carried out according to other, economic, criteria for the specific project.

  18. Radiation Dose Assessments of Solar Particle Events with Spectral Representation at High Energies for the Improvement of Radiation Protection

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Atwell, William; Tylka, Allan J.; Dietrich, William F.; Cucinotta, Francis A.

    2010-01-01

    For radiation dose assessments of major solar particle events (SPEs), spectral functional forms of SPEs have been made by fitting available satellite measurements up to approx.100 MeV. However, very high-energy protons (above 500 MeV) have been observed with neutron monitors (NMs) in ground level enhancements (GLEs), which generally present the most severe radiation hazards to astronauts. Due to technical difficulties in converting NM data into absolutely normalized fluence measurements, those functional forms were made with little or no use of NM data. A new analysis of NM data has found that a double power law in rigidity (the so-called Band function) generally provides a satisfactory representation of the combined satellite and NM data from approx.10 MeV to approx.10 GeV in major SPEs (Tylka & Dietrich 2009). We use the Band function fits to re-assess human exposures from large SPEs. Using different spectral representations of large SPEs, variations of exposure levels were compared. The results can be applied to the development of approaches of improved radiation protection for astronauts, as well as the optimization of mission planning and shielding for future space missions.
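
    A hedged sketch of a double power law in rigidity with a smooth rollover, in the spirit of the Band-type functional form mentioned above. The parameter names (J0, gamma_a, gamma_b, R0) are illustrative rather than the authors' notation, and the low-rigidity index gamma_a is assumed to be smaller than the high-rigidity index gamma_b.

```python
# Double power law in rigidity with a smooth exponential rollover joining
# the two branches at R_break = (gamma_b - gamma_a) * R0 (illustrative form).
import numpy as np

def band_rigidity_spectrum(R, J0, gamma_a, gamma_b, R0):
    """Differential fluence vs. rigidity R: a low-rigidity power law with an
    exponential rollover that joins continuously onto a steeper high-rigidity
    power law at the break rigidity."""
    R = np.asarray(R, dtype=float)
    R_break = (gamma_b - gamma_a) * R0          # requires gamma_b > gamma_a
    low = J0 * R**(-gamma_a) * np.exp(-R / R0)
    high = J0 * R_break**(gamma_b - gamma_a) * np.exp(gamma_a - gamma_b) * R**(-gamma_b)
    return np.where(R <= R_break, low, high)
```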

  19. Use of density functional theory method to calculate structures of neutral carbon clusters Cn (3 ≤ n ≤ 24) and study their variability of structural forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yen, T. W.; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw

    2015-02-28

    In this work, we present modifications to the well-known basin hopping (BH) optimization algorithm [D. J. Wales and J. P. Doye, J. Phys. Chem. A 101, 5111 (1997)] by incorporating in it the unique and specific nature of interactions among valence electrons and ions in carbon atoms through calculating the cluster’s total energy by the density functional tight-binding (DFTB) theory, using it to find the lowest energy structures of carbon clusters and, from these optimized atomic and electronic structures, studying their varied forms of topological transitions, which include a linear chain, a monocyclic to a polycyclic ring, and a fullerene/cage-like geometry. In this modified BH (MBH) algorithm, we define a spatial volume within which the cluster’s lowest energy structure is to be searched, and introduce in addition a cut-and-splice genetic operator to improve the performance of the energy-minimum search relative to the original BH technique. The present MBH/DFTB algorithm is, therefore, characteristically distinguishable from the original BH technique commonly applied to nonmetallic and metallic clusters, technically more thorough and natural in describing the intricate couplings between valence electrons and ions in a carbon cluster, and thus theoretically sound in putting these two charged components on an equal footing. The proposed modified minimization algorithm should be more appropriate, accurate, and precise in the description of a carbon cluster. We evaluate the present algorithm, its energy-minimum searching in particular, by its optimization robustness. Specifically, we first check the MBH/DFTB technique for two representative carbon clusters of larger size, i.e., C60 and C72, against the popular cut-and-splice approach [D. M. Deaven and K. M. Ho, Phys. Rev. Lett. 75, 288 (1995)] that normally is combined with the genetic algorithm method for finding the cluster's energy minimum, before employing it to investigate carbon clusters in the size range C3-C24, studying their topological transitions. An effort was also made to compare our MBH/DFTB results, re-optimized by full density functional theory (DFT) calculations, with some early DFT-based studies.
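
    A minimal illustration of the basin-hopping idea using SciPy on a toy Lennard-Jones trimer. The paper's MBH/DFTB method instead evaluates the cluster energy with density functional tight binding, restricts the search to a defined spatial volume and adds a cut-and-splice move, none of which is reproduced here.

```python
# Basin hopping on a toy cluster: repeated random perturbation + local
# minimization with Metropolis acceptance between local minima.
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(x):
    """Total Lennard-Jones energy of N atoms, coordinates flattened in x
    (epsilon = sigma = 1)."""
    pos = x.reshape(-1, 3)
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            e += 4.0 * (r2 ** -6 - r2 ** -3)
    return e

x0 = np.random.default_rng(0).normal(scale=0.8, size=9)   # 3 atoms
result = basinhopping(lj_energy, x0, niter=200, stepsize=0.5)
print(result.fun)   # approaches -3.0, the LJ trimer global minimum
```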

  20. Prehabilitation to enhance postoperative recovery for an octogenarian following robotic-assisted hysterectomy with endometrial cancer.

    PubMed

    Carli, Franco; Brown, Russell; Kennepohl, Stephan

    2012-08-01

    Postoperative complications represent a major concern for elderly patients. We report a case of a medically complex and frail 88-yr-old woman with endometrial cancer who was scheduled for a robotic-assisted total abdominal hysterectomy. In addition to her cardiac morbidity she presented with several risk factors for neurocognitive decline, including prior episodes of postoperative delirium. The patient underwent functional, nutritional, and neuropsychological assessments prior to a three-week prehabilitation home-based program consisting of strength and endurance exercises as well as nutritional optimization. Remarkably, there were no episodes of postoperative confusion, and over the following eight weeks, she continued to show sustained improvement in exercise tolerance (as per the six-minute walk test), cognitive function (as per the Repeatable Battery for the Assessment of Neuropsychological Status), and overall functional capacity (Short Form-36). This report provides suggestive evidence that a prehabilitation program optimized the health of this elderly patient and may have prevented a further episode of postoperative delirium. Prehabilitation protocols should be evaluated in clinical trials to evaluate their efficacy and the target populations who may benefit and to elucidate the underlying mechanisms responsible for enhanced recovery in the perioperative setting.

  1. Stomatin-Like Protein 2 Binds Cardiolipin and Regulates Mitochondrial Biogenesis and Function▿

    PubMed Central

    Christie, Darah A.; Lemke, Caitlin D.; Elias, Isaac M.; Chau, Luan A.; Kirchhof, Mark G.; Li, Bo; Ball, Eric H.; Dunn, Stanley D.; Hatch, Grant M.; Madrenas, Joaquín

    2011-01-01

    Stomatin-like protein 2 (SLP-2) is a widely expressed mitochondrial inner membrane protein of unknown function. Here we show that human SLP-2 interacts with prohibitin-1 and -2 and binds to the mitochondrial membrane phospholipid cardiolipin. Upregulation of SLP-2 expression increases cardiolipin content and the formation of metabolically active mitochondrial membranes and induces mitochondrial biogenesis. In human T lymphocytes, these events correlate with increased complex I and II activities, increased intracellular ATP stores, and increased resistance to apoptosis through the intrinsic pathway, ultimately enhancing cellular responses. We propose that the function of SLP-2 is to recruit prohibitins to cardiolipin to form cardiolipin-enriched microdomains in which electron transport complexes are optimally assembled. Likely through the prohibitin functional interactome, SLP-2 then regulates mitochondrial biogenesis and function. PMID:21746876

  2. A new look at the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.

    1994-01-01

    The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As an optimizer, the code NPSOL was used, which is based on a sequential quadratic programming (SQP) algorithm. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses although convergence became slower for the 72-bar truss. When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
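
    The Kreisselmeier-Steinhauser aggregation mentioned above has a standard closed form; the short sketch below shows the usual numerically stable variant, with the aggregation parameter name rho chosen for illustration.

```python
# Kreisselmeier-Steinhauser (KS) aggregation: collapse many constraints
# g_i(x) <= 0 into one smooth constraint. rho controls how closely KS
# approximates max_i g_i; the max-offset form avoids overflow.
import numpy as np

def ks_aggregate(g, rho=50.0):
    """KS(g) = max(g) + (1/rho) * log(sum(exp(rho * (g - max(g)))))."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

# KS(g) >= max(g) and KS(g) -> max(g) as rho grows, so enforcing
# ks_aggregate(g) <= 0 conservatively enforces all original constraints.
print(ks_aggregate([-0.2, -0.05, -0.3]))
```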

  3. Generation of noncircular gears for variable motion of the crank-slider mechanism

    NASA Astrophysics Data System (ADS)

    Niculescu, M.; Andrei, L.; Cristescu, A.

    2016-08-01

    The paper proposes a modified kinematics for the crank-slider mechanism of a nails machine. The variable rotational motion of the driven gear allows the velocity of the slider to be reduced in the head forming phase and increases the period over which the forming forces are applied, improving the quality of the final product. The noncircular gears are designed based on a hybrid function for the gear transmission ratio whose parameters enable multiple variations of the noncircular driven gears and crank-slider mechanism kinematics, respectively. The AutoCAD graphical and programming facilities are used (i) to analyse and optimize the slider-crank mechanism output functions, in correlation with the predefined noncircular gears transmission ratio, (ii) to generate the noncircular centrodes using the kinematics hypothesis, (iii) to generate the variable geometry of the gear teeth profiles, based on the rolling method, and (iv) to produce the gears solid virtual models. The study highlights the benefits/limits that the hybrid functions defining the noncircular gear transmission ratio have on both crank-slider mechanism kinematics and gear geometry.

  4. Mapping the Chevallier-Polarski-Linder parametrization onto physical dark energy Models

    NASA Astrophysics Data System (ADS)

    Scherrer, Robert J.

    2015-08-01

    We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a, namely w = w0 + wa(1 - a). In the case of quintessence models, we find that over most of the w0, wa parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V(ϕ), while a one-dimensional subset of parameter space, for which wa = κ(1 + w0), with κ constant, corresponds to a wide range of functional forms for V(ϕ). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi = wa + w0 and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.
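
    A small sketch of the CPL parametrization and the dark-energy density evolution it implies; the density ratio follows from integrating the continuity equation with w(a) = w0 + wa(1 - a), giving rho_DE(a)/rho_DE(1) = a^(-3(1+w0+wa)) exp(-3 wa (1 - a)).

```python
# CPL equation of state and the implied dark-energy density evolution.
import numpy as np

def w_cpl(a, w0, wa):
    """CPL equation-of-state parameter w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0, wa):
    """Dark-energy density relative to today (a = 1) for the CPL form."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

print(w_cpl(0.5, -0.9, 0.2), rho_de_ratio(0.5, -0.9, 0.2))
```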

  5. The speciation of the proteome

    PubMed Central

    Jungblut, Peter R; Holzhütter, Hermann G; Apweiler, Rolf; Schlüter, Hartmut

    2008-01-01

    Introduction In proteomics a paradoxical situation has developed in recent years. On one side, it is basic knowledge that proteins are post-translationally modified and occur in different isoforms. On the other side, the protein expression concept disclaims post-translational modifications by connecting protein names directly with function. Discussion Optimal proteome coverage is today reached by bottom-up liquid chromatography/mass spectrometry. But quantification at the peptide level in shotgun or bottom-up approaches by liquid chromatography and mass spectrometry completely ignores that a specific peptide may exist in an unmodified form and in several-fold modified forms. The acceptance of the protein species concept is a basic prerequisite for meaningful quantitative analyses in functional proteomics. In discovery approaches only top-down analyses, separating the protein species before digestion, identification and quantification by two-dimensional gel electrophoresis or protein liquid chromatography, allow the correlation between changes of a biological situation and function. Conclusion To obtain biologically relevant information, kinetics and systems biology have to be performed at the protein species level, which is the major challenge in proteomics today. PMID:18638390

  6. Stomatin-Like Protein 2 Is Required for In Vivo Mitochondrial Respiratory Chain Supercomplex Formation and Optimal Cell Function

    PubMed Central

    Mitsopoulos, Panagiotis; Chang, Yu-Han; Wai, Timothy; König, Tim; Dunn, Stanley D.; Langer, Thomas

    2015-01-01

    Stomatin-like protein 2 (SLP-2) is a mainly mitochondrial protein that is widely expressed and is highly conserved across evolution. We have previously shown that SLP-2 binds the mitochondrial lipid cardiolipin and interacts with prohibitin-1 and -2 to form specialized membrane microdomains in the mitochondrial inner membrane, which are associated with optimal mitochondrial respiration. To determine how SLP-2 functions, we performed bioenergetic analysis of primary T cells from T cell-selective Slp-2 knockout mice under conditions that forced energy production to come almost exclusively from oxidative phosphorylation. These cells had a phenotype characterized by increased uncoupled mitochondrial respiration and decreased mitochondrial membrane potential. Since formation of mitochondrial respiratory chain supercomplexes (RCS) may correlate with more efficient electron transfer during oxidative phosphorylation, we hypothesized that the defect in mitochondrial respiration in SLP-2-deficient T cells was due to deficient RCS formation. We found that in the absence of SLP-2, T cells had decreased levels and activities of complex I-III2 and I-III2-IV1-3 RCS but no defects in assembly of individual respiratory complexes. Impaired RCS formation in SLP-2-deficient T cells correlated with significantly delayed T cell proliferation in response to activation under conditions of limiting glycolysis. Altogether, our findings identify SLP-2 as a key regulator of the formation of RCS in vivo and show that these supercomplexes are required for optimal cell function. PMID:25776552

  7. Welfare implications of energy and environmental policies: A general equilibrium approach

    NASA Astrophysics Data System (ADS)

    Iqbal, Mohammad Qamar

    Government intervention and implementation of policies can impose a financial and social cost. To achieve a desired goal there could be several different alternative policies or routes, and the government would like to choose the one that imposes the least social cost and/or generates the greatest social benefit. Therefore, applied welfare economics plays a vital role in public decision making. This paper recasts welfare measures such as the equivalent variation in terms of the prices of factors of production rather than product prices. This is made possible by using duality theory within a general equilibrium framework and by deriving alternative forms of indirect utility functions and expenditure functions in factor prices. Not only are we able to recast existing welfare measures in factor prices, but we are also able to perform a true cost-benefit analysis of government policies, using comparative static analysis of different equilibria and breaking a monetary measure of welfare change such as the equivalent variation into its components. A further advantage of our research is demonstrated by incorporating externalities and public goods in the utility function. It is interesting that, under a general equilibrium framework, an optimal income tax tends to reduce inequalities. Results show that the imposition of taxes at socially optimal rates brings a net gain to society. It was also seen that even though a pollution tax may reduce GDP, it leads to an increase in the welfare of the society if it is imposed at an optimal rate.

  8. Optimal swimming of a sheet.

    PubMed

    Montenegro-Johnson, Thomas D; Lauga, Eric

    2014-06-01

    Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments.

  9. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.
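
    A hedged sketch of the statistical idea described above: rather than optimizing performance at a few fixed operating points, one optimizes the expected value of the objective over a probability distribution of operating conditions, so each condition is weighted by its relative likelihood. The drag model below is purely hypothetical; only the expectation-by-quadrature structure is the point.

```python
# Expected-value objective over a Gaussian distribution of operating
# conditions, computed by Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def drag(shape_param, mach):
    """Toy drag model (hypothetical): quadratic in Mach, shifted by the design."""
    return (mach - shape_param) ** 2 + 0.01

def expected_drag(shape_param, mach_mean=0.78, mach_std=0.02, n_nodes=9):
    """E[drag] over a Gaussian Mach distribution via the probabilists' rule."""
    nodes, weights = hermegauss(n_nodes)
    mach = mach_mean + mach_std * nodes
    return np.sum(weights * drag(shape_param, mach)) / np.sqrt(2.0 * np.pi)

# ~ mach_std**2 + 0.01 for this toy model when shape_param equals the mean Mach
print(expected_drag(0.78))
```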

  10. Dysfunctional visual word form processing in progressive alexia

    PubMed Central

    Rising, Kindle; Stib, Matthew T.; Rapcsak, Steven Z.; Beeson, Pélagie M.

    2013-01-01

    Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy. PMID:23471694

  11. Dysfunctional visual word form processing in progressive alexia.

    PubMed

    Wilson, Stephen M; Rising, Kindle; Stib, Matthew T; Rapcsak, Steven Z; Beeson, Pélagie M

    2013-04-01

    Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the 'visual word form area'. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.

  12. Reflectance analysis of porosity gradient in nanostructured silicon layers

    NASA Astrophysics Data System (ADS)

    Jurečka, Stanislav; Imamura, Kentaro; Matsumoto, Taketoshi; Kobayashi, Hikaru

    2017-12-01

    In this work we study the optical properties of nanostructured layers formed on a silicon surface. Nanostructured layers on Si are formed in order to strongly suppress light reflectance. Low spectral reflectance is important for improving the conversion efficiency of solar cells and for other optoelectronic applications. In our approach, an effective method of forming nanostructured layers with ultralow reflectance over a broad wavelength interval is based on metal-assisted etching of Si. The Si surface, immersed in an HF and H2O2 solution, is etched in contact with a Pt mesh roller, and the structure of the mesh is transferred onto the etched surface. During this etching procedure the layer density evolves gradually, and the spectral reflectance decreases exponentially with depth in the porous layer. We analyzed the layer porosity by incorporating the porosity gradient into the construction of a theoretical model of the layer spectral reflectance. In our approach, the analyzed layer is split into 20 sublayers. The complex dielectric function in each sublayer is computed using Bruggeman effective medium theory, and the theoretical spectral reflectance of the modelled multilayer system is computed using the Abeles matrix formalism. The porosity gradient is extracted from the theoretical reflectance model optimized against the experimental values. The resulting description of the porosity development in the structure provides important information for optimizing the technological treatment operations.
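
    A hedged sketch of the modelling chain described above: a Bruggeman effective-medium dielectric constant for each porous sublayer and a normal-incidence characteristic (Abeles-type) matrix for the whole stack. The dielectric value, the 20-sublayer linear porosity gradient and the thicknesses are illustrative only, and absorption is neglected (real dielectric constants) to keep the sketch short.

```python
# Graded porous stack: Bruggeman mixing per sublayer + transfer-matrix reflectance.
import numpy as np

def bruggeman_eps(f_void, eps_si, eps_void=1.0):
    """Effective dielectric constant of a Si/void mixture (spherical inclusions)."""
    b = f_void * (2 * eps_void - eps_si) + (1 - f_void) * (2 * eps_si - eps_void)
    return (b + np.sqrt(b * b + 8 * eps_si * eps_void)) / 4.0

def stack_reflectance(porosities, thicknesses, wavelength, eps_si=15.0,
                      n_ambient=1.0, n_substrate=np.sqrt(15.0)):
    """Reflectance of a graded porous stack at normal incidence."""
    M = np.eye(2, dtype=complex)
    for f, d in zip(porosities, thicknesses):
        n = np.sqrt(bruggeman_eps(f, eps_si))
        delta = 2.0 * np.pi * n * d / wavelength
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    m11, m12, m21, m22 = M.ravel()
    num = n_ambient * m11 + n_ambient * n_substrate * m12 - m21 - n_substrate * m22
    den = n_ambient * m11 + n_ambient * n_substrate * m12 + m21 + n_substrate * m22
    return abs(num / den) ** 2

# 20 sublayers, porosity decreasing linearly from surface to substrate
porosities = np.linspace(0.9, 0.1, 20)
thicknesses = np.full(20, 25.0)        # nm
print(stack_reflectance(porosities, thicknesses, wavelength=550.0))
```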

  13. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.

  14. Towards global optimization with adaptive simulated annealing

    NASA Astrophysics Data System (ADS)

    Forbes, Gregory W.; Jones, Andrew E.

    1991-01-01

    The structure of the simulated annealing algorithm is presented and its rationale is discussed. A unifying heuristic is then introduced which serves as a guide in the design of all of the sub-components of the algorithm. Simply put, this heuristic principle states that at every cycle in the algorithm the occupation density should be kept as close as possible to the equilibrium distribution. This heuristic has been used as a guide to develop novel step generation and temperature control methods intended to improve the efficiency of the simulated annealing algorithm. The resulting algorithm has been used in attempts to locate good solutions for one of the lens design problems associated with this conference, viz. the "monochromatic quartet", and a sample of the results is presented. 1. Global optimization in the context of lens design. Whatever the context, optimization algorithms relate to problems that take the following form: given some configuration space with coordinates r = (x1, ..., xn) and a merit function written as f(r), find the point r* where f(r) takes its lowest value, that is, find the global minimum. In many cases there is also a set of auxiliary constraints that must be met, so the problem statement becomes: find the global minimum of the merit function within the region defined by the equality constraints E_j(r) = 0, j = 1, 2, ..., p, together with a corresponding set of inequality constraints, j = 1, 2, ..., q.
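
    A minimal simulated-annealing sketch illustrating the Metropolis acceptance rule and a simple geometric cooling schedule; the adaptive step-generation and temperature-control methods proposed in the paper are not reproduced here, and the merit function is a toy 1-D multimodal example.

```python
# Minimal simulated annealing: random steps, Metropolis acceptance, geometric cooling.
import math, random

def simulated_annealing(f, x0, step=0.5, T0=1.0, cooling=0.995, n_iter=20000):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(n_iter):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Always accept improvements, sometimes accept uphill moves.
        if fc <= fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f

# A 1-D multimodal test merit function.
print(simulated_annealing(lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0, x0=4.0))
```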

  15. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    PubMed

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement on the basis of structured light plays a significant role in optical inspection research. The 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points that are randomly selected from the target are adopted to construct a projection invariant. Closed-form solutions for the 3D laser points are obtained from the homogeneous linear equations generated from the projection invariants. The optimization function is constructed from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of the image quantity, the lens distortion and the noise are investigated in the experiments, which demonstrate that the reconstruction approach is effective for accurate testing in the measurement system.

  16. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used for the search. An optimal control model is used to control the flying vehicle so that it moves along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements in optimal control design; in this paper the cost functional makes the air vehicle reach the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. A further result shows that the cost functional used is convex; the convexity of the cost functional guarantees the existence of an optimal control. The paper also presents some simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path in this paper is the Pontryagin Minimum Principle.

  17. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation in material processing allows a trial-and-error strategy to improve virtual processes without incurring material costs or interrupting production, and therefore saves a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools, with the aim of demonstrating this on industrially relevant cases. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach exploits the parallel capabilities of the computer used with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to run several simulations at the same time reduces the required time dramatically. The presented examples demonstrate the versatility of the method. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.
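
    A hedged sketch of the metamodel idea: fit a Kriging-type surrogate (here, Gaussian process regression) to a handful of exact "master point" evaluations, then query the cheap surrogate everywhere else. The expensive forging simulation is stood in for by a toy analytic objective, and the kernel settings are illustrative.

```python
# Kriging-style surrogate of an expensive objective from a few exact evaluations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(x):
    return np.sin(3.0 * x) + 0.5 * x**2          # stand-in for a forging simulation

X_master = np.linspace(-2.0, 2.0, 8).reshape(-1, 1)   # few exact "master points"
y_master = expensive_objective(X_master).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                     normalize_y=True).fit(X_master, y_master)

X_query = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
y_pred, y_std = surrogate.predict(X_query, return_std=True)   # y_std can guide new master points
print(X_query[np.argmin(y_pred)])   # candidate optimum proposed by the metamodel
```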

  18. Preparation and characterization of a novel conformed bipolymer paclitaxel-nanoparticle using tea polysaccharides and zein.

    PubMed

    Li, Shuqin; Wang, Xiuming; Li, Weiwei; Yuan, Guoqi; Pan, Yuxiang; Chen, Haixia

    2016-08-01

    To improve the aqueous solubility of the anticancer agent paclitaxel (PTX), a newly conformed bipolymer paclitaxel-nanoparticle using tea polysaccharide (TPS) and zein was prepared and characterized. Tea polysaccharide was used as the biopolymer shell and zein as the core, and the optimal formulation was characterized by TEM, DSC, FTIR and an in vitro release study. Results showed that the optimal particle was obtained with a particle yield of 40.01%, a drug loading of 0.12% and diameters around 165 nm when the tea polysaccharide concentration was set at 0.2% and the PTX:zein ratio at 1:10. The particle was a nanoparticle with a spherical surface, and the encapsulated PTX was in an amorphous rather than a crystalline form. PTX interacted with zein and the polysaccharide through O-H and C=O groups and showed a sustained release. The results suggested that the novel bipolymer might be a promising agent for PTX delivery and that tea polysaccharide demonstrated its function in the drug delivery system. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Compositional Models of Glass/Melt Properties and their Use for Glass Formulation

    DOE PAGES

    Vienna, John D.; USA, Richland Washington

    2014-12-18

    Nuclear waste glasses must simultaneously meet a number of criteria related to their processability, product quality, and cost factors. The properties that must be controlled in glass formulation and waste vitrification plant operation tend to vary smoothly with composition, allowing glass property-composition models to be developed and used. Models have been fit to the key glass properties. The properties are transformed so that simple functions of composition (e.g., linear, polynomial, or component ratios) can be used as model forms. The model forms are fit to experimental data designed statistically to efficiently cover the composition space of interest. Examples of these models are found in the literature. The glass property-composition models, their uncertainty definitions, property constraints, and optimality criteria are combined to formulate optimal glass compositions, to control composition in vitrification plants, and to qualify waste glasses for disposal. An overview of current glass property-composition modeling techniques is summarized in this paper along with an example of how those models are applied to glass formulation and product qualification at the planned Hanford high-level waste vitrification plant.
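
    A hedged sketch of the simplest of the model forms mentioned above, a first-order (linear mixture) property-composition model, property ≈ sum_i b_i x_i over component fractions x_i, fit by least squares to designed composition-property data. The data below are invented purely to show the fitting step.

```python
# Linear mixture property-composition model fit by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(4), size=30)         # 30 glasses, 4 component fractions
true_coeffs = np.array([2.0, -1.0, 0.5, 3.0])  # illustrative property contributions
y = X @ true_coeffs + 0.05 * rng.normal(size=30)

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)                                   # recovered component coefficients
```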

  20. [Conceptual approach to formation of a modern system of medical provision].

    PubMed

    Belevitin, A B; Miroshnichenko, Iu V; Bunin, S A; Goriachev, A B; Krasavin, K D

    2009-09-01

    Within the framework of forming the new structure of the medical service of the Armed Forces, the principal approaches to optimizing the development of the medical supply system were determined. It was proposed to use the following principles: hierarchic structuring, purposeful orientation, vertical task sharing, horizontal task sharing, complex simulation, and permanent improvement. The main directions for optimizing the structure and composition of the medical supply system of the Armed Forces are: forming modern medical supply institutions--centers for support with equipment and facilities on the basis of central and regional storehouses--and transferring several functions of military administration bodies to them; and creating medical supply offices on the basis of military hospitals serving as base treatment-and-prophylaxis institutions in assigned territorial zones of responsibility, in order to carry out the complex of tasks of supplying the units and institutions attached to them for medical support with medical equipment. The medical supply system is built on three levels: Center - military region (Navy region) - territorial zone of responsibility.

  1. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regards to the fit-basis functions need to be taken. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to that of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
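
    A hedged sketch of one ingredient discussed above, fitting a Morse-type fit-basis function to single-point energies along one coordinate cut with non-linear least squares; the ADGA grid generation and the conjugate-gradient optimizer of the paper are not reproduced here, and the energies are synthetic.

```python
# Fit a Morse-type basis function to a 1-D potential cut.
import numpy as np
from scipy.optimize import curve_fit

def morse(r, D, a, r0, e0):
    """Morse potential: D*(1 - exp(-a*(r - r0)))**2 + e0."""
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2 + e0

r = np.linspace(0.7, 3.0, 25)                        # grid of single points
energies = morse(r, D=0.18, a=1.1, r0=0.96, e0=0.0)  # synthetic "ab initio" cut
popt, _ = curve_fit(morse, r, energies, p0=[0.1, 1.0, 1.0, 0.0])
print(popt)   # recovers D, a, r0, e0
```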

  2. MN15-L: A New Local Exchange-Correlation Functional for Kohn-Sham Density Functional Theory with Broad Accuracy for Atoms, Molecules, and Solids.

    PubMed

    Yu, Haoyu S; He, Xiao; Truhlar, Donald G

    2016-03-08

    Kohn-Sham density functional theory is widely used for applications of electronic structure theory in chemistry, materials science, and condensed-matter physics, but the accuracy depends on the quality of the exchange-correlation functional. Here, we present a new local exchange-correlation functional called MN15-L that predicts accurate results for a broad range of molecular and solid-state properties including main-group bond energies, transition metal bond energies, reaction barrier heights, noncovalent interactions, atomic excitation energies, ionization potentials, electron affinities, total atomic energies, hydrocarbon thermochemistry, and lattice constants of solids. The MN15-L functional has the same mathematical form as a previous meta-nonseparable gradient approximation exchange-correlation functional, MN12-L, but it is improved because we optimized it against a larger database, designated 2015A, and included smoothness restraints; the optimization has a much better representation of transition metals. The mean unsigned error on 422 chemical energies is 2.32 kcal/mol, which is the best among all tested functionals, with or without nonlocal exchange. The MN15-L functional also provides good results for test sets that are outside the training set. A key issue is that the functional is local (no nonlocal exchange or nonlocal correlation), which makes it relatively economical for treating large and complex systems and solids. Another key advantage is that medium-range correlation energy is built in so that one does not need to add damped dispersion by molecular mechanics in order to predict accurate noncovalent binding energies. We believe that the MN15-L functional should be useful for a wide variety of applications in chemistry, physics, materials science, and molecular biology.

  3. [THE OPTIMIZATION OF NUTRITION FUNCTION UNDER SYNDROME OF RESISTANCE TO INSULIN, DISORDER OF FATTY ACIDS' METABOLISM AND ABSORPTION OF GLUCOSE BY CELLS (A LECTURE)].

    PubMed

    Titov, V N

    2016-01-01

    The phylogenetic processes continue to proceed in Homo sapiens. At the very early stages of phylogenesis, the ancient Archaea that formed mitochondria through symbiotic interaction with later bacterial cells jointly formed yet another system. In this system, cells do not absorb glucose if it is possible to absorb fatty acids from the intercellular medium in the form of unesterified fatty acids or ketone bodies, the metabolites of fatty acids. This is caused by objectively existing conditions and the successive availability of substrates at the stages of phylogenesis: acetate, ketone bodies, fatty acids and only later glucose. The phylogenetically late insulin, billions of years later, used the same dependencies in forming the regulation of fatty acid metabolism and of glucose absorption by cells. In order for the resistance syndrome to cease to exist as a foundation of metabolic pandemics, Homo sapiens has to understand the following: after the successful functioning of the Archaea plus bacterial cells and the action of insulin considered by biology for the third time in phylogenesis, and using the biological function of intelligence, the content of the phylogenetically early palmitic saturated fatty acid in food cannot exceed the capacity of the phylogenetically late lipoproteins to transfer it in the intercellular medium and blood, and of the cells to absorb it. It is supposed that at early stages of phylogenesis the biological function of intelligence was primarily formed to bring into line the "unconformities" of metabolic regulation against the background of seemingly relative biological "perfection". These unconformities were subsequently and separately formed at the level of cells, in paracrine-regulated cenoses of cells and organs, and at the level of the organism. The prevention of insulin resistance basically requires the biological function of intelligence and the principle of self-restraint, bringing the multiple desires of Homo sapiens into line with much more limited biological possibilities. The "unconformities" of metabolic regulation in vivo are etiological factors of all metabolic pandemics, including atherosclerosis, metabolic arterial hypertension, obesity and the metabolic syndrome. Tertium non datur.

  4. In-hospital rehabilitation after multiple joint procedures of the lower extremities in haemophilia patients: clinical guidelines for physical therapists.

    PubMed

    De Kleijn, P; Fischer, K; Vogely, H Ch; Hendriks, C; Lindeman, E

    2011-11-01

    This project aimed to develop guidelines for use during in-hospital rehabilitation after combinations of multiple joint procedures (MJP) of the lower extremities in persons with haemophilia (PWH). MJP are defined as surgical procedures on the ankles, knees and hips, performed in any combination, staged, or during a single session. The MJP that we studied included total knee arthroplasty, total hip arthroplasty and ankle arthrodesis. Literature on rheumatoid arthritis has demonstrated promising functional results, fewer hospitalization days and fewer days lost from work. However, the complication rate is higher and rehabilitation needs optimal conditions. Since 1995, at the Van Creveldkliniek, 54 PWH have undergone MJP. During rehabilitation in our hospital, performed by experienced physical therapists, standard guidelines seemed of little use. Guidelines will guarantee an optimal physical recovery and maximum benefit from this enormous investment. This will lead to an optimal functional capability and optimal quality of life for this elderly group of PWH. A review of the literature revealed no existing guidelines for MJP in haemophilia. Therefore, a working group was formed to develop and implement such guidelines, and the procedure is explained. The total group of PWH who underwent MJP is described, subdivided into combinations of joints. For these subgroups, the number of days in hospital, complications and profile at discharge, as well as a guideline on the clinical rehabilitation, are given. The guideline contains a general part and a part for each specific subgroup. © 2011 Blackwell Publishing Ltd.

  5. The short form of the recombinant CAL-A-type lipase UM03410 from the smut fungus Ustilago maydis exhibits an inherent trans-fatty acid selectivity.

    PubMed

    Brundiek, Henrike; Saß, Stefan; Evitt, Andrew; Kourist, Robert; Bornscheuer, Uwe T

    2012-04-01

    The Ustilago maydis lipase UM03410 belongs to the mostly unexplored Candida antarctica lipase (CAL-A) subfamily. The two lipases with [corrected] the highest identity are a lipase from Sporisorium reilianum and the prototypic CAL-A. In contrast to the other CAL-A-type lipases, this hypothetical U. maydis lipase is annotated to possess a prolonged N-terminus of unknown function. Here, we show for the first time the recombinant expression of two versions of lipase UM03410: the full-length form (lipUMf) and an N-terminally truncated form (lipUMs). For comparison to the prototype, the expression of recombinant CAL-A in E. coli was investigated. Although both forms of lipase UM03410 could be expressed functionally in E. coli, the N-terminally truncated form (lipUMs) demonstrated significantly higher activities towards p-nitrophenyl esters. The functional expression of the N-terminally truncated lipase was further optimized by the appropriate choice of the E. coli strain, lowering the cultivation temperature to 20 °C and enrichment of the cultivation medium with glucose. Primary characteristics of the recombinant lipase are its pH optimum in the range of 6.5-7.0 and its temperature optimum at 55 °C. As is typical for lipases, lipUM03410 shows preference for long chain fatty acid esters with myristic acid ester (C14:0 ester) being the most preferred one. More importantly, lipUMs exhibits an inherent preference for C18:1Δ9 trans and C18:1Δ11 trans-fatty acid esters similar to CAL-A. Therefore, the short form of this U. maydis lipase is the only other currently known lipase with a distinct trans-fatty acid selectivity.

  6. Real-time terminal area trajectory planning for runway independent aircraft

    NASA Astrophysics Data System (ADS)

    Xue, Min

    The increasing demand for commercial air transportation results in delays due to traffic queues that form bottlenecks along final approach and departure corridors. In urban areas, it is often infeasible to build new runways, and regardless of automation upgrades traffic must remain separated to avoid the wakes of previous aircraft. Vertical or short takeoff and landing aircraft as Runway Independent Aircraft (RIA) can increase passenger throughput at major urban airports via the use of vertiports or stub runways. The concept of simultaneous non-interfering (SNI) operations has been proposed to reduce traffic delays by creating approach and departure corridors that do not intersect existing fixed-wing routes. However, SNI trajectories open new routes that may overfly noise-sensitive areas, and RIA may generate more noise than traditional jet aircraft, particularly on approach. In this dissertation, we develop efficient SNI noise abatement procedures applicable to RIA. First, we introduce a methodology based on modified approximated cell-decomposition and Dijkstra's search algorithm to optimize longitudinal plane (2-D) RIA trajectories over a cost function that minimizes noise, time, and fuel use. Then, we extend the trajectory optimization model to 3-D with a k-ary tree as the discrete search space. We incorporate geography information system (GIS) data, specifically population, into our objective function, and focus on a practical case study: the design of SNI RIA approach procedures to Baltimore-Washington International airport. Because solutions were represented as trim state sequences, we incorporated smooth transition between segments to enable more realistic cost estimates. Due to the significant computational complexity, we investigated alternative more efficient optimization techniques applicable to our nonlinear, non-convex, heavily constrained, and discontinuous objective function. Comparing genetic algorithm (GA) and adaptive simulated annealing (ASA) with our original Dijkstra's algorithm, ASA is identified as the most efficient algorithm for terminal area trajectory optimization. The effects of design parameter discretization are analyzed, with results indicating a SNI procedure with 3-4 segments effectively balances simplicity with cost minimization. Finally, pilot control commands were implemented and generated via optimization-base inverse simulation to validate execution of the optimal approach trajectories.

  7. A global optimization algorithm inspired in the behavior of selfish herds.

    PubMed

    Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián

    2017-10-01

    In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators by means of two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on their classification as either prey or predator, each individual is guided by a set of unique evolutionary operators inspired by this prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared to other well-known evolutionary optimization approaches such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), Crow Search Algorithm (CSA), Dragonfly Algorithm (DA), Moth-flame Optimization Algorithm (MOA) and Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions commonly considered within the literature on evolutionary algorithms. The experimental results show the remarkable performance of the proposed approach against the other compared methods, and as such SHO proves to be an excellent alternative for solving global optimization problems. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
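
    The same iterative least-squares idea, expressed outside the spreadsheet, is sketched below in Python with scipy.optimize.curve_fit. The saturating model and the data points are hypothetical examples standing in for a user-defined function y = f(x) and experimental measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# User-defined model y = f(x); here a saturating hyperbola as an example.
def model(x, vmax, km):
    return vmax * x / (km + x)

# Hypothetical experimental data (x, y) with some noise.
x = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
y = np.array([0.28, 0.48, 0.78, 1.09, 1.34, 1.48, 1.55])

# Iterative least-squares fit, analogous to running SOLVER to minimise the
# sum of squared residuals between the data and the function.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))            # rough standard errors of the parameters
print("vmax = %.3f +/- %.3f" % (popt[0], perr[0]))
print("km   = %.3f +/- %.3f" % (popt[1], perr[1]))
```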

  9. Functionalization of (n, 0) CNTs (n = 3-16) by uracil: DFT studies

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmoud; Harismah, Kun; Jafari, Elham; Gülseren, Oğuz; Rad, Ali Shokuhi

    2018-01-01

    Density functional theory (DFT) calculations were performed to investigate stabilities and properties of uracil (U)-functionalized carbon nanotubes (CNTs). To this aim, the optimized molecular properties were evaluated for (n, 0) models of CNTs (n = 3-16) in the original and U-functionalized forms. The results indicated that the dipole moments and energy gaps were independent of tubular diameters, whereas the binding energies showed that U-functionalization could be better achieved for the n = 8-11 curvatures of (n, 0) CNTs. Further studies based on the evaluated atomic-scale properties, including quadrupole coupling constants (CQ), indicated that the electronic properties of atoms could detect the effects of diameter variations of (n, 0) CNTs, with the effects being most significant for the atoms around the U-functionalization regions. Finally, the results obtained for singular U, original CNTs, and CNT-U hybrids were compared to each other to demonstrate the stabilities and properties of the U-functionalized (n, 0) CNTs.

  10. Yeast-expressed recombinant protein of the receptor-binding domain in SARS-CoV spike protein with deglycosylated forms as a SARS vaccine candidate

    PubMed Central

    Chen, Wen-Hsiang; Du, Lanying; Chag, Shivali M; Ma, Cuiqing; Tricoche, Nancy; Tao, Xinrong; Seid, Christopher A; Hudspeth, Elissa M; Lustigman, Sara; Tseng, Chien-Te K; Bottazzi, Maria Elena; Hotez, Peter J; Zhan, Bin; Jiang, Shibo

    2014-01-01

    Development of vaccines for preventing a future pandemic of severe acute respiratory syndrome (SARS) caused by SARS coronavirus (SARS-CoV) and for biodefense preparedness is urgently needed. Our previous studies have shown that a candidate SARS vaccine antigen consisting of the receptor-binding domain (RBD) of SARS-CoV spike protein can induce potent neutralizing antibody responses and protection against SARS-CoV challenge in vaccinated animals. To optimize expression conditions for scale-up production of the RBD vaccine candidate, we hypothesized that this could be potentially achieved by removing glycosylation sites in the RBD protein. In this study, we constructed two RBD protein variants: 1) RBD193-WT (193-aa, residues 318–510) and its deglycosylated forms (RBD193-N1, RBD193-N2, RBD193-N3); 2) RBD219-WT (219-aa, residues 318–536) and its deglycosylated forms (RBD219-N1, RBD219-N2, and RBD219-N3). All constructs were expressed as recombinant proteins in yeast. The purified recombinant proteins of these constructs were compared for their antigenicity, functionality and immunogenicity in mice using alum as the adjuvant. We found that RBD219-N1 exhibited high expression yield, and maintained its antigenicity and functionality. More importantly, RBD219-N1 induced significantly stronger RBD-specific antibody responses and a higher level of neutralizing antibodies in immunized mice than RBD193-WT, RBD193-N1, RBD193-N3, or RBD219-WT. These results suggest that RBD219-N1 could be selected as an optimal SARS vaccine candidate for further development. PMID:24355931

  11. Yeast-expressed recombinant protein of the receptor-binding domain in SARS-CoV spike protein with deglycosylated forms as a SARS vaccine candidate.

    PubMed

    Chen, Wen-Hsiang; Du, Lanying; Chag, Shivali M; Ma, Cuiqing; Tricoche, Nancy; Tao, Xinrong; Seid, Christopher A; Hudspeth, Elissa M; Lustigman, Sara; Tseng, Chien-Te K; Bottazzi, Maria Elena; Hotez, Peter J; Zhan, Bin; Jiang, Shibo

    2014-01-01

    Development of vaccines for preventing a future pandemic of severe acute respiratory syndrome (SARS) caused by SARS coronavirus (SARS-CoV) and for biodefense preparedness is urgently needed. Our previous studies have shown that a candidate SARS vaccine antigen consisting of the receptor-binding domain (RBD) of SARS-CoV spike protein can induce potent neutralizing antibody responses and protection against SARS-CoV challenge in vaccinated animals. To optimize expression conditions for scale-up production of the RBD vaccine candidate, we hypothesized that this could be potentially achieved by removing glycosylation sites in the RBD protein. In this study, we constructed two RBD protein variants: 1) RBD193-WT (193-aa, residues 318-510) and its deglycosylated forms (RBD193-N1, RBD193-N2, RBD193-N3); 2) RBD219-WT (219-aa, residues 318-536) and its deglycosylated forms (RBD219-N1, RBD219-N2, and RBD219-N3). All constructs were expressed as recombinant proteins in yeast. The purified recombinant proteins of these constructs were compared for their antigenicity, functionality and immunogenicity in mice using alum as the adjuvant. We found that RBD219-N1 exhibited high expression yield, and maintained its antigenicity and functionality. More importantly, RBD219-N1 induced significantly stronger RBD-specific antibody responses and a higher level of neutralizing antibodies in immunized mice than RBD193-WT, RBD193-N1, RBD193-N3, or RBD219-WT. These results suggest that RBD219-N1 could be selected as an optimal SARS vaccine candidate for further development.

  12. Optimization of a Tube Hydroforming Process

    NASA Astrophysics Data System (ADS)

    Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny

    2004-06-01

    An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.
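
    The study's GA search coupled HEEDS to LS-DYNA finite-element runs; the sketch below only illustrates the genetic-algorithm loop itself on two scalar design variables (peak internal pressure and axial feed) against a stand-in formability score. The bounds, objective, and GA settings are all assumptions for illustration.

```python
import random

random.seed(1)

# Design variables: peak internal pressure [MPa] and axial feed [mm] (assumed bounds).
P_RANGE, F_RANGE = (20.0, 120.0), (0.0, 30.0)

def formability(p, feed):
    """Stand-in objective: rewards expansion, penalises exceeding an assumed
    forming-limit threshold. A real evaluation would call the FE solver."""
    expansion = 0.01 * p + 0.03 * feed
    fld_violation = max(0.0, 0.012 * p - 0.02 * feed - 0.9)
    return expansion - 5.0 * fld_violation

def random_individual():
    return [random.uniform(*P_RANGE), random.uniform(*F_RANGE)]

def mutate(ind, rate=0.2):
    child = ind[:]
    if random.random() < rate:
        child[0] = min(max(child[0] + random.gauss(0, 5), P_RANGE[0]), P_RANGE[1])
    if random.random() < rate:
        child[1] = min(max(child[1] + random.gauss(0, 2), F_RANGE[0]), F_RANGE[1])
    return child

def crossover(a, b):
    w = random.random()                      # blend crossover of the two parents
    return [w * a[0] + (1 - w) * b[0], w * a[1] + (1 - w) * b[1]]

pop = [random_individual() for _ in range(30)]
for gen in range(40):
    pop.sort(key=lambda ind: formability(*ind), reverse=True)
    parents = pop[:10]                       # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=lambda ind: formability(*ind))
print("best pressure %.1f MPa, feed %.1f mm, score %.3f"
      % (best[0], best[1], formability(*best)))
```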

  13. Replacement of missing teeth with fiber-reinforced composite FPDs: clinical protocol.

    PubMed

    Bouillaguet, Serge; Schütt, Andrea; Marin, Isabelle; Etechami, Leila; Di Salvo, Giancarlo; Krejci, Ivo

    2003-04-01

    The concept of minimally invasive preparation protocols has resulted in reduced loss of critical tooth structures and maintenance of optimal strength, form, and aesthetics. While various treatment options have been described for single-tooth replacement, fiber-reinforced composite (FRC) fixed partial dentures (FPDs) provide a viable treatment alternative with proven mechanical properties, aesthetics, and function. This article presents several clinical scenarios in which minimally invasive adhesive FRC FPDs are provided to deliver enhanced predictability, strength, and durability.

  14. [Problems and solutions of implementing plan environmental impact assessment in China].

    PubMed

    Liang, Xue-gong; Liu, Juan

    2004-11-01

    At present, there are two forms of Plan Environmental Impact Assessment (PEIA): self-assessment and entrusted assessment. The appropriate object and the time of starting PEIA were discussed. The paper points out that self-assessment should be able to fulfil the essential roles of PEIA, such as 'starting as early as possible' and 'optimization of options'. The concept and function of alternatives, PEIA methodology, and the role of public participation are also discussed briefly.

  15. A Universal Rank-Size Law

    PubMed Central

    2016-01-01

    A mere hyperbolic law, like the Zipf power-law function, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed based on theoretical physics arguments, starting from the Yule-Simon distribution. A model is proposed that leads to a universal form. A theoretical suggestion for the “best (or optimal) distribution” is provided through an entropy argument. The ranking of areas by the number of cities in various countries, together with some sport competition rankings, serves for the present illustrations. PMID:27812192
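
    A small sketch of the comparison the abstract motivates: fitting ranked data with a pure Zipf power law and with a two-parameter generalization that bends the tail downward. The Lavalette-type form used here and the data values are assumptions for illustration, not necessarily the exact universal distribution derived in the article.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ranked sizes (e.g. number of cities per area), rank 1..N.
sizes = np.array([100, 62, 45, 36, 30, 22, 17, 12, 8, 5], dtype=float)
rank = np.arange(1, len(sizes) + 1, dtype=float)
N = len(sizes)

def zipf(r, c, a):
    return c * r ** (-a)

def lavalette(r, c, a):
    # Two-parameter generalisation; decays faster than a power law at high rank.
    return c * (N * r / (N + 1 - r)) ** (-a)

for name, f, p0 in [("Zipf", zipf, [100.0, 1.0]),
                    ("Lavalette-type", lavalette, [100.0, 0.5])]:
    popt, _ = curve_fit(f, rank, sizes, p0=p0, maxfev=10000)
    sse = np.sum((sizes - f(rank, *popt)) ** 2)
    print(f"{name}: params {np.round(popt, 3)}, SSE {sse:.1f}")
```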

  16. Homotopy method for optimization of variable-specific-impulse low-thrust trajectories

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng

    2017-11-01

    The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.
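
    The specific homotopy functions coupling power throttle level and specific impulse are not reproduced here; the sketch below shows only the generic continuation idea the abstract relies on: solve an easy problem first, then track its solution as a parameter is swept toward the hard target problem, warm-starting each step. The equations are toy examples.

```python
import numpy as np
from scipy.optimize import fsolve

# Hard target problem: F(x) = 0.  Easy problem: G(x) = 0 with a known solution.
def F(x):
    return np.cos(x) - x**3          # "hard" equation to be solved

def G(x):
    return x - 1.0                   # trivially solved by x = 1

def H(x, eps):
    """Homotopy: eps = 1 gives the easy problem, eps = 0 the target problem."""
    return eps * G(x) + (1.0 - eps) * F(x)

x = 1.0                              # solution of the easy problem
for eps in np.linspace(1.0, 0.0, 21):
    x = fsolve(lambda v: H(v, eps), x)[0]   # warm-start from the previous step
print("root of F:", x, " residual:", F(x))
```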

  17. On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Gonzalo, J.; Domínguez, D.; López, D.

    2014-12-01

    From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of their flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial-intelligence resources to reach optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that allow this dynamic planning negotiation to become operational soon. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those that try to solve the problem using the Pontryagin principle, where influence parameters are added to the state variables to form a split boundary-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). Numerical methods, on the other hand, are based on Bellman's dynamic programming (or Dijkstra-type algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions for choosing the best method. Traditionally, analytic methods have been employed more for continuous problems, whereas numerical methods have been used for discrete ones. In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given on a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method, as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.

  18. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically- generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
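
    The forward-mode versus reverse-mode distinction (ADIFOR versus ADJIFOR) can be illustrated on a toy scalar function; this is not the FORTRAN tooling itself, just the underlying chain-rule mechanics in Python. Forward mode needs one pass per input direction, whereas one reverse (adjoint) sweep yields all partial derivatives at once, which is why adjoint codes scale well to hundreds or thousands of design variables.

```python
import math

# Forward mode: propagate (value, derivative) pairs alongside the computation.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

def dsin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x, y):                 # toy objective: f = sin(x*y) + x*x
    return dsin(x * y) + x * x

# Forward mode: seed one input direction at a time.
fx = f(Dual(1.2, 1.0), Dual(0.7, 0.0)).dot   # df/dx
fy = f(Dual(1.2, 0.0), Dual(0.7, 1.0)).dot   # df/dy

# Reverse (adjoint) mode: record the forward sweep, then sweep backwards once.
x, y = 1.2, 0.7
u = x * y; s = math.sin(u); out = s + x * x  # forward sweep ("tape")
dout = 1.0                                   # seed the adjoint of the output
ds, dsq = dout, dout                         # adjoints of s and of the x*x term
du = math.cos(u) * ds
dx = y * du + 2 * x * dsq
dy = x * du
print(fx, dx)   # both equal df/dx
print(fy, dy)   # both equal df/dy
```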

  19. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control

    PubMed Central

    Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
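
    A minimal illustration of the DMD step the abstract refers to: exact, SVD-based DMD applied to snapshots of a known toy linear system, recovering its continuous-time eigenvalues from data alone. This is standard DMD with linear measurements of the state, not the paper's sparsity-based selection of nonlinear observables.

```python
import numpy as np
from scipy.linalg import expm

# Toy linear system x' = A x, sampled at interval dt to build snapshot data.
A = np.array([[0.0, 1.0], [-2.0, -0.1]])
dt, n_snap = 0.05, 200
Phi = expm(A * dt)                       # exact one-step propagator
X = np.zeros((2, n_snap))
X[:, 0] = [1.0, 0.0]
for k in range(1, n_snap):
    X[:, k] = Phi @ X[:, k - 1]

# Exact DMD: best-fit linear map X2 ~= A_dmd X1 from the data only.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                    # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, _ = np.linalg.eig(A_tilde)

# Discrete-time DMD eigenvalues map back to continuous-time eigenvalues of A.
print("recovered:", np.sort_complex(np.log(eigvals) / dt))
print("true     :", np.sort_complex(np.linalg.eigvals(A)))
```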

  20. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Formulating pattern selection as an optimization problem that co-optimizes a number of objective functions reflecting modelers' insight and expertise has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  1. River landscapes and optimal channel networks.

    PubMed

    Balister, Paul; Balogh, József; Bertuzzo, Enrico; Bollobás, Béla; Caldarelli, Guido; Maritan, Amos; Mastrandrea, Rossana; Morris, Robert; Rinaldo, Andrea

    2018-06-26

    We study tree structures termed optimal channel networks (OCNs) that minimize the total gravitational energy loss in the system, an exact property of steady-state landscape configurations that prove dynamically accessible and strikingly similar to natural forms. Here, we show that every OCN is a so-called natural river tree, in the sense that there exists a height function such that the flow directions are always directed along steepest descent. We also study the natural river trees in an arbitrary graph in terms of forbidden substructures, which we call k-path obstacles, and OCNs on a d-dimensional lattice, improving earlier results by determining the minimum energy up to a constant factor for every [Formula: see text]. The results extend our capabilities in environmental statistical mechanics. Copyright © 2018 the Author(s). Published by PNAS.

  2. Progress Report on Optimizing X-ray Optical Prescriptions for Wide-Field Applications

    NASA Technical Reports Server (NTRS)

    Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.

    2011-01-01

    We report on the present status of our continuing efforts to develop a method for optimizing wide-field nested x-ray telescope mirror prescriptions. Utilizing extensive Monte-Carlo ray trace simulations, we find an analytic form for the root-mean-square dispersion of rays from a Wolter I optic on the surface of a flat focal plane detector as a function of detector tilt away from the nominal focal plane and detector displacement along the optical axis. The configuration minimizing the ray dispersion from a nested array of Wolter I telescopes is found by solving a linear system of equations for tilt and individual mirror pair displacement. Finally we outline our initial efforts at expanding this method to include higher order polynomial terms in the mirror prescriptions.

  3. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
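
    A sketch of the string-averaging incremental subgradient scheme described above, applied to a small non-smooth convex problem (minimising a sum of absolute residuals). The problem data, number of strings, and diminishing step size are illustrative choices, not those of the tomography experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-smooth convex problem: minimise sum_i |a_i . x - b_i|  (robust fit).
m, n = 200, 5
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

def subgrad(x, i):
    """A subgradient of |a_i . x - b_i| with respect to x."""
    return np.sign(A[i] @ x - b[i]) * A[i]

strings = np.array_split(rng.permutation(m), 4)   # split the data into 4 strings
x = np.zeros(n)
for k in range(1, 500):
    step = 0.5 / k                                # diminishing step size
    endpoints = []
    for string in strings:                        # each string processed
        y = x.copy()                              # independently from x
        for i in string:                          # incremental subgradient
            y = y - step * subgrad(y, i)          # pass along the string
        endpoints.append(y)
    x = np.mean(endpoints, axis=0)                # average the string end-points

print("objective:", np.abs(A @ x - b).sum())
print("distance to generator:", np.linalg.norm(x - x_true))
```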

  4. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  5. Studying the varied shapes of gold clusters by an elegant optimization algorithm that hybridizes the density functional tight-binding theory and the density functional theory

    NASA Astrophysics Data System (ADS)

    Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.

    2017-11-01

    We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm. This improved algorithm proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster, and this step (running DFTB/MBH optimization for a given number of Monte-Carlo steps) is meant to efficiently bring the Au cluster near to the region of the lowest energy minimum, since the cluster as a whole has explicitly considered the interactions of valence electrons with ions, albeit semi-quantitatively. Then, in the second step, the energy-minimum search continues with a replacement of the energy function calculated by the DFTB theory in the first step by one calculated with the full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy with the MBH strategy and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure for the Au40 cluster and correctly confirms the linear shape of C8, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Aun: it changes from a planar form (n = 3-11) → an oblate-like cage (n = 12-15) → a hollow-shape cage (n = 16-18) and finally a pyramidal-like cage (n = 19, 20). These varied forms of the cluster's shapes are consistent with those reported in the literature.
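
    The DFTB and DFT energy evaluations cannot be reproduced here; the sketch below shows only the basin-hopping search loop itself, using scipy's implementation (rather than the authors' modified basin hopping) and a small Lennard-Jones cluster as a stand-in energy function.

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of a small cluster (stand-in for DFTB/DFT)."""
    pos = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4.0 * (r ** -12 - r ** -6)
    return e

n_atoms = 7
rng = np.random.default_rng(3)
x0 = rng.uniform(-1.0, 1.0, size=3 * n_atoms)   # random initial geometry

# Basin hopping: random perturbation + local minimisation, keep the best basin.
result = basinhopping(lj_energy, x0, niter=100, stepsize=0.5,
                      minimizer_kwargs={"method": "L-BFGS-B"})
print("lowest LJ-7 energy found:", result.fun)  # known global minimum is about -16.505
```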

  6. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
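
    A minimal version of the idea, closest in spirit to the recursive O(N^2) approximation: greedily grow an ensemble of receptor conformations, at each step adding the conformation whose inclusion most improves the ROC AUC of the ensemble-docking scores. The docking scores here are random stand-ins; in the paper they come from actual docking runs against known actives and inactives.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Stand-in docking scores: rows = ligands, columns = receptor conformations.
# Lower score = better predicted binding; actives get a signal in some conformations.
n_lig, n_conf = 300, 12
labels = rng.integers(0, 2, size=n_lig)              # 1 = active, 0 = inactive
useful = rng.random(n_conf) > 0.5                     # only some conformations "work"
scores = rng.normal(size=(n_lig, n_conf)) - 0.8 * labels[:, None] * useful

def ensemble_auc(members):
    """AUC of ensemble docking: each ligand keeps its best (lowest) score."""
    best = scores[:, members].min(axis=1)
    return roc_auc_score(labels, -best)               # negate because lower is better

selected, remaining, current_auc = [], list(range(n_conf)), 0.0
while remaining:
    gains = [(ensemble_auc(selected + [c]), c) for c in remaining]
    best_auc, best_c = max(gains)
    if best_auc <= current_auc:                        # stop once AUC stops improving
        break
    selected.append(best_c)
    remaining.remove(best_c)
    current_auc = best_auc
print("selected conformations:", selected, "AUC %.3f" % current_auc)
```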

  7. Development of a living membrane comprising a functional human renal proximal tubule cell monolayer on polyethersulfone polymeric membrane.

    PubMed

    Schophuizen, Carolien M S; De Napoli, Ilaria E; Jansen, Jitske; Teixeira, Sandra; Wilmer, Martijn J; Hoenderop, Joost G J; Van den Heuvel, Lambert P W; Masereeuw, Rosalinde; Stamatialis, Dimitrios

    2015-03-01

    The need for improved renal replacement therapies has stimulated innovative research for the development of a cell-based renal assist device. A key requirement for such a device is the formation of a "living membrane", consisting of a tight kidney cell monolayer with preserved functional organic ion transporters on a suitable artificial membrane surface. In this work, we applied a unique conditionally immortalized proximal tubule epithelial cell (ciPTEC) line with an optimized coating strategy on polyethersulfone (PES) membranes to develop a living membrane with a functional proximal tubule epithelial cell layer. PES membranes were coated with combinations of 3,4-dihydroxy-l-phenylalanine and human collagen IV (Coll IV). The optimal coating time and concentrations were determined to achieve retention of vital blood components while preserving high water transport and optimal ciPTEC adhesion. The ciPTEC monolayers obtained were examined through immunocytochemistry to detect zona occludens 1 tight junction proteins. Reproducible monolayers were formed when using a combination of 2 mg ml(-1) 3,4-dihydroxy-l-phenylalanine (4 min coating, 1h dissolution) and 25 μg ml(-1) Coll IV (4 min coating). The successful transport of (14)C-creatinine through the developed living membrane system was used as an indication for organic cation transporter functionality. The addition of metformin or cimetidine significantly reduced the creatinine transepithelial flux, indicating active creatinine uptake in ciPTECs, most likely mediated by the organic cation transporter, OCT2 (SLC22A2). In conclusion, this study shows the successful development of a living membrane consisting of a reproducible ciPTEC monolayer on PES membranes, an important step towards the development of a bioartificial kidney. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  8. Probabilistic Cloning of Three Real States with Optimal Success Probabilities

    NASA Astrophysics Data System (ADS)

    Rui, Pin-shu

    2017-06-01

    We investigate the probabilistic quantum cloning (PQC) of three real states with average probability distribution. To get the analytic forms of the optimal success probabilities we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of 1 → 2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out too. The optimal success probabilities are also generalized to the M → N PQC case.

  9. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
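
    For orientation, the classical (unregularized) Uzawa iteration that the theorem generalizes can be written as below for the equality-constrained problem of minimizing f(x) subject to g(x) = 0, with Lagrangian L; the dual-regularization terms that stabilize the iteration against perturbed initial data are not shown here.

```latex
\begin{aligned}
x^{k} &\in \operatorname*{arg\,min}_{x}\; L(x,\lambda^{k})
       = \operatorname*{arg\,min}_{x}\;\bigl[f(x) + \langle \lambda^{k}, g(x)\rangle\bigr],\\
\lambda^{k+1} &= \lambda^{k} + \alpha_{k}\, g(x^{k}), \qquad \alpha_{k} > 0 .
\end{aligned}
```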

  10. Three VO2+ complexes of the pyridoxal-derived Schiff bases: Synthesis, experimental and theoretical characterizations, and catalytic activity in a cyclocondensation reaction

    NASA Astrophysics Data System (ADS)

    Jafari-Moghaddam, Faezeh; Beyramabadi, S. Ali; Khashi, Maryam; Morsali, Ali

    2018-02-01

    Three oxovanadium(IV) complexes of pyridoxal Schiff bases have been newly synthesized and characterized. The Schiff bases used were N,N′-dipyridoxyl(ethylenediamine), N,N′-dipyridoxyl(1,3-propanediamine) and N,N′-dipyridoxyl(1,2-benzenediamine). Also, the optimized geometry, assignment of the IR bands and the Natural Bond Orbital (NBO) analysis of the complexes have been computed using density functional theory (DFT) methods. The dianionic form of the Schiff bases (L2-) acts as a tetradentate N2O2 ligand. The coordinating atoms of the Schiff base are the phenolate oxygens and imine nitrogens, which occupy the four basal positions of the square-pyramidal geometry of the complexes. The oxo ligand occupies the apical position of the [VO(L)] complexes. In the optimized geometry of the complexes, the coordinated Schiff bases have a more planar structure than their free form. Due to the high energy gaps, all of the complexes are predicted to be stable. Good agreement between the experimental values and the DFT-computed results supports the suitability of the optimized geometries for the complexes. The investigated complexes show high catalytic activities in the synthesis of tetrahydrobenzo[b]pyrans through a three-component cyclocondensation reaction of dimedone, malononitrile and some aromatic aldehydes. The complexes catalyzed the reaction under solvent-free conditions and the catalysts were found to be reusable.

  11. Dynamic motion planning of 3D human locomotion using gradient-based optimization.

    PubMed

    Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G

    2008-06-01

    Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.

  12. Complexation of nicotinic acid with first generation poly(amidoamine) dendrimers: A microscopic view from density functional theory

    NASA Astrophysics Data System (ADS)

    Badalkhani-Khamseh, Farideh; Bahrami, Aidin; Ebrahim-Habibi, Azadeh; Hadipour, Nasser L.

    2017-09-01

    This study examines some electronic and structural parameters of niacin (NA) encapsulation into the PAMAM-G1 dendrimer using DFT calculations. Optimized structural geometries, interaction energies, NMR, NBO, and AIM analyses, in accordance with experiment, revealed that the stability of the G1@NA complex can be attributed to the five intermolecular hydrogen bonds formed between the functional groups of G1 and NA. To approach the experimental results more closely, all the calculations were repeated using a self-consistent reaction field (SCRF) and the polarizable continuum model (PCM) to address implicit solvent effects, and the results obtained were in line with the gas-phase calculations.

  13. Crystal Structure of 17α-Dihydroequilin, C18H22O2, from Synchrotron Powder Diffraction Data and Density Functional Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaduk, James; Gindhart, Amy; Blanton, Thomas

    The crystal structure of 17α-dihydroequilin has been solved and refined using synchrotron X-ray powder diffraction data, and optimized using density functional techniques. 17α-dihydroequilin crystallizes in space group P212121 (#19) with a = 6.76849(1) Å, b = 8.96849(1) Å, c = 23.39031(5) Å, V = 1419.915(3) Å3, and Z = 4. Both hydroxyl groups form hydrogen bonds to each other, resulting in zig-zag chains along the b-axis. The powder diffraction pattern has been submitted to ICDD for inclusion in the Powder Diffraction File™ as the entry 00-066-1608.

  14. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is a process for finding the parameter or parameters able to deliver an optimal value of an objective function. Seeking an optimal generic model for optimizing is a computer-science problem that has been pursued by numerous researchers. A generic model is a model that can be operated to solve many varieties of optimization problems. Using an object-oriented method, a generic model for optimizing was constructed. Moreover, two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared to find the more effective one. The result was that both methods gave the same value of the objective function, while the hill-climbing-based model consumed the shorter running time.
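
    A compact comparison of the two optimization methods the paper benchmarks, applied to the same multimodal test objective. The objective, neighbourhood move, and cooling schedule below are illustrative choices, not the paper's generic model or its data.

```python
import math
import random

random.seed(4)

def objective(x):
    """Multimodal test function to be maximised; global optimum near x = 0."""
    return -x * x + 3.0 * math.cos(5.0 * x)

def neighbour(x):
    return x + random.gauss(0, 0.3)

def hill_climbing(x, iters=2000):
    for _ in range(iters):
        cand = neighbour(x)
        if objective(cand) > objective(x):   # accept only improving moves
            x = cand
    return x

def simulated_annealing(x, iters=2000, t0=2.0):
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9    # linear cooling schedule
        cand = neighbour(x)
        delta = objective(cand) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand                         # occasionally accept worse moves
    return x

start = 4.0                                  # start near a poor local optimum
for name, method in [("hill climbing", hill_climbing),
                     ("simulated annealing", simulated_annealing)]:
    x = method(start)
    print(f"{name:>20}: x = {x:+.3f}, objective = {objective(x):.3f}")
```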

  15. [The clinical biochemistry of hypo-lipedemic therapy and mechanisms of action of statins: the fatty acids, statins and diabetes mellitus].

    PubMed

    Titov, V N

    2014-02-01

    In the liver, statins inhibit synthesis of the specific pool of cholesterol that hepatocytes form de novo for the monolayer of polar lipids at the surface of nascent very-low-density lipoproteins. By decreasing the content of non-esterified cholesterol in this monolayer, statins activate the hydrolysis of triglycerides in very-low-density lipoproteins, the formation of low-density lipoproteins, and their uptake by cells through apoB-100 receptors. By activating uptake of low-density lipoproteins, statins restore the functional action of essential polyenoic fatty acids. The essential polyenoic fatty acids, fibrates and glitazones establish in cells an effective oleic variant of metabolism, in which mitochondria predominantly oxidize oleic fatty acid. Statins, which do not activate oxidation in peroxisomes and which inhibit the activity of stearoyl-CoA desaturase, establish in cells a less effective palmitic variant of fatty acid metabolism, with palmitic fatty acid oxidized in the mitochondria. The fatty acids released by hydrolysis of exogenous triglycerides are then not sufficient to synthesize an optimal amount of ATP, and the fatty acids accumulated in adipocytes have to be used. This is the cause of the insulin resistance induced by statins. Functionally, very-low-density lipoproteins and low-density lipoproteins are phylogenetically different: the former transfer fatty acids to cells in the form of triglycerides, the latter in the form of esters with the alcohol cholesterol. Statins normalize the uptake of essential polyenoic fatty acids by cells, which manifests as a physiological action termed pleiotropic.

  16. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    PubMed

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied up to any specific form of loss function, which is typical for existing cross-modal hashing methods, but rather we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely-used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-arts with only moderate training and testing time.

  17. Expansion and cryopreservation of porcine and human corneal endothelial cells.

    PubMed

    Marquez-Curtis, Leah A; McGann, Locksley E; Elliott, Janet A W

    2017-08-01

    Impairment of the corneal endothelium causes blindness that afflicts millions worldwide and constitutes the most often cited indication for corneal transplants. The scarcity of donor corneas has prompted the alternative use of tissue-engineered grafts which requires the ex vivo expansion and cryopreservation of corneal endothelial cells. The aims of this study are to culture and identify the conditions that will yield viable and functional corneal endothelial cells after cryopreservation. Previously, using human umbilical vein endothelial cells (HUVECs), we employed a systematic approach to optimize the post-thaw recovery of cells with high membrane integrity and functionality. Here, we investigated whether improved protocols for HUVECs translate to the cryopreservation of corneal endothelial cells, despite the differences in function and embryonic origin of these cell types. First, we isolated endothelial cells from pig corneas and then applied an interrupted slow cooling protocol in the presence of dimethyl sulfoxide (Me 2 SO), with or without hydroxyethyl starch (HES). Next, we isolated and expanded endothelial cells from human corneas and applied the best protocol verified using porcine cells. We found that slow cooling at 1 °C/min in the presence of 5% Me 2 SO and 6% HES, followed by rapid thawing after liquid nitrogen storage, yields membrane-intact cells that could form monolayers expressing the tight junction marker ZO-1 and cytoskeleton F-actin, and could form tubes in reconstituted basement membrane matrix. Thus, we show that a cryopreservation protocol optimized for HUVECs can be applied successfully to corneal endothelial cells, and this could provide a means to address the need for off-the-shelf cryopreserved cells for corneal tissue engineering and regenerative medicine. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Pre-operative optimisation of lung function

    PubMed Central

    Azhar, Naheed

    2015-01-01

    The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta 2 agonists, inhaled corticosteroids and systemic corticosteroids, are the main drugs used for this and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function. PMID:26556913

  19. Serine phosphorylation by SYK is critical for nuclear localization and transcription factor function of Ikaros

    PubMed Central

    Uckun, Fatih M.; Ma, Hong; Zhang, Jian; Ozer, Zahide; Dovat, Sinisa; Mao, Cheney; Ishkhanian, Rita; Goodman, Patricia; Qazi, Sanjive

    2012-01-01

    Ikaros is a zinc finger-containing DNA-binding protein that plays a pivotal role in immune homeostasis through transcriptional regulation of the earliest stages of lymphocyte ontogeny and differentiation. Functional deficiency of Ikaros has been implicated in the pathogenesis of acute lymphoblastic leukemia, the most common form of childhood cancer. Therefore, a stringent regulation of Ikaros activity is considered of paramount importance, but the operative molecular mechanisms responsible for its regulation remain largely unknown. Here we provide multifaceted genetic and biochemical evidence for a previously unknown function of spleen tyrosine kinase (SYK) as a partner and posttranslational regulator of Ikaros. We demonstrate that SYK phosphorylates Ikaros at unique C-terminal serine phosphorylation sites S358 and S361, thereby augmenting its nuclear localization and sequence-specific DNA binding activity. Mechanistically, we establish that SYK-induced Ikaros activation is essential for its nuclear localization and optimal transcription factor function. PMID:23071339

  20. A slow-releasing form of prostacyclin agonist (ONO1301SR) enhances endogenous secretion of multiple cardiotherapeutic cytokines and improves cardiac function in a rapid-pacing-induced model of canine heart failure.

    PubMed

    Shirasaka, Tomonori; Miyagawa, Shigeru; Fukushima, Satsuki; Saito, Atsuhiro; Shiozaki, Motoko; Kawaguchi, Naomasa; Matsuura, Nariaki; Nakatani, Satoshi; Sakai, Yoshiki; Daimon, Takashi; Okita, Yutaka; Sawa, Yoshiki

    2013-08-01

    Cardiac functional deterioration in dilated cardiomyopathy (DCM) is known to be reversed by intramyocardial up-regulation of multiple cardioprotective factors, whereas a prostacyclin analog, ONO1301, has been shown to paracrinally activate interstitial cells to release a variety of protective factors. We here hypothesized that intramyocardial delivery of a slow-releasing form of ONO1301 (ONO1301SR) might activate regional myocardium to up-regulate cardiotherapeutic factors, leading to regional and global functional recovery in DCM. ONO1301 elevated messenger RNA and protein level of hepatocyte growth factor, vascular endothelial growth factor, and stromal-derived factor-1 of normal human dermal fibroblasts in a dose-dependent manner in vitro. Intramyocardial delivery of ONO1301SR, which is ONO1301 mixed with polylactic and glycolic acid polymer (PLGA), but not that of PLGA only, yielded significant global functional recovery in a canine rapid pacing-induced DCM model, assessed by echocardiography and cardiac catheterization (n = 5 each). Importantly, speckle-tracking echocardiography unveiled significant regional functional recovery in the ONO1301-delivered territory, consistent to significantly increased vascular density, reduced interstitial collagen accumulation, attenuated myocyte hypertrophy, and reversed mitochondrial structure in the corresponding area. Intramyocardial delivery of ONO1301SR, which is a PLGA-coated slow-releasing form of ONO1301, up-regulated multiple cardiotherapeutic factors in the injected territory, leading to region-specific reverse left ventricular remodeling and consequently a global functional recovery in a rapid-pacing-induced canine DCM model, warranting a further preclinical study to optimize this novel drug-delivery system to treat DCM. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  1. Ceratonia siliqua L. hydroethanolic extract obtained by ultrasonication: antioxidant activity, phenolic compounds profile and effects in yogurts functionalized with their free and microencapsulated forms.

    PubMed

    Rached, Irada; Barros, Lillian; Fernandes, Isabel P; Santos-Buelga, Celestino; Rodrigues, Alírio E; Ferchichi, Ali; Barreiro, Maria Filomena; Ferreira, Isabel C F R

    2016-03-01

    Bioactive extracts were obtained from powdered carob pulp through an ultrasound extraction process and then evaluated in terms of antioxidant activity. Ten minutes of ultrasonication at 375 Hz were the optimal conditions leading to an extract with the highest antioxidant effects. After its chemical characterization, which revealed the preponderance of gallotannins, the extract (free and microencapsulated) was incorporated in yogurts. The microspheres were prepared using an extract/sodium alginate ratio of 100/400 (mg mg(-1)) selected after testing different ratios. The yogurts with the free extract exhibited higher antioxidant activity than the samples added with the encapsulated extracts, showing the preserving role of alginate as a coating material. None of the forms significantly altered the yogurt's nutritional value. This study confirmed the efficiency of microencapsulation to stabilize functional ingredients in food matrices maintaining almost the structural integrity of polyphenols extracted from carob pulp and furthermore improving the antioxidant potency of the final product.

  2. Mechanical Characteristics of SiC Coating Layer in TRISO Fuel Particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Hosemann; J. N. Martos; D. Frazer

    2013-11-01

    Tristructural isotropic (TRISO) particles are considered as advanced fuel forms for a variety of fission platforms. While these fuel structures have been tested and deployed in reactors, the mechanical properties of these structures as a function of production parameters need to be investigated in order to ensure their reliability during service. Nanoindentation techniques, indentation crack testing, and half-sphere crush testing were utilized in order to evaluate the integrity of the SiC coating layer that is meant to prevent fission product release in the coated particle fuel form. The results are complemented by scanning electron microscopy (SEM) of the grain structure, which is subject to change as a function of processing parameters and can alter mechanical properties such as hardness, elastic modulus, fracture toughness and fracture strength. Through utilization of these advanced techniques, subtle differences in mechanical properties that can be important for in-pile fuel performance can be distinguished and optimized in iteration with the processing science of coated fuel particle production.

  3. Linear parameter varying representations for nonlinear control design

    NASA Astrophysics Data System (ADS)

    Carter, Lance Huntington

    Linear parameter varying (LPV) systems are investigated as a framework for gain-scheduled control design and optimal hybrid control. An LPV system is defined as a linear system whose dynamics depend upon an a priori unknown but measurable exogenous parameter. A gain-scheduled autopilot design is presented for a bank-to-turn (BTT) missile. The method is novel in that the gain-scheduled design does not involve linearizations about operating points. Instead, the missile dynamics are brought to LPV form via a state transformation. This idea is applied to the design of a coupled longitudinal/lateral BTT missile autopilot. The pitch and yaw/roll dynamics are separately transformed to LPV form, where the cross axis states are treated as "exogenous" parameters. These are actually endogenous variables, so such a plant is called "quasi-LPV." Once in quasi-LPV form, a family of robust controllers using mu synthesis is designed for both the pitch and yaw/roll channels, using angle-of-attack and roll rate as the scheduling variables. The closed-loop time response is simulated using the original nonlinear model and also using perturbed aerodynamic coefficients. Modeling and control of engine idle speed is investigated using LPV methods. It is shown how generalized discrete nonlinear systems may be transformed into quasi-LPV form. A discrete nonlinear engine model is developed and expressed in quasi-LPV form with engine speed as the scheduling variable. An example control design is presented using linear quadratic methods. Simulations are shown comparing the LPV based controller performance to that using PID control. LPV representations are also shown to provide a setting for hybrid systems. A hybrid system is characterized by control inputs consisting of both analog signals and discrete actions. A solution is derived for the optimal control of hybrid systems with generalized cost functions. This is shown to be computationally intensive, so a suboptimal strategy is proposed that neglects a subset of possible parameter trajectories. A computational algorithm is constructed for this suboptimal solution applied to a class of linear non-quadratic cost functions.

  4. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    NASA Astrophysics Data System (ADS)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised single network adaptive critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including comparison studies with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.

  5. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy

    PubMed Central

    Wen, Hui; Xie, Weixin; Pei, Jihong

    2016-01-01

    This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used for nonlinear classification. The optimized learning strategy is as follows: first, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network is then used for adaptive nonlinear mapping of the sample space. Next, the number of BP input nodes is determined from the number of adaptively generated RBF hidden nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms. PMID:27792737

  6. Arrangement Analysis of Leaves Optimized on Photon Flux Density or Photosynthetic Rate

    NASA Astrophysics Data System (ADS)

    Obara, Shin'ya; Tanno, Itaru

    By clarifying the evolutionary process of plants, useful information may be obtained for engineering. Consequently, an analysis algorithm that investigates the optimal arrangement of plant leaves was developed. In the developed algorithm, the Monte Carlo method is introduced and sunlight is simulated. Moreover, the arrangement optimization of leaves is analyzed using a Genetic Algorithm (GA). The number of light quanta (photon flux density) reaching the leaves, or their average photosynthetic rate, was set as the objective function, and leaf models of a dogwood and a ginkgo tree were analyzed. The number of leaf models was set between two and four, and the position of each leaf was expressed in terms of the angle of direction, elevation angle, rotation angle, and the representative length of the branch of a leaf. The chromosome model introduced into the GA consists of information concerning the position of the leaf. Based on the analysis results, the characteristics of the leaves of an actual plant could be simulated by ensuring the algorithm had multiple constraint conditions. The optimal arrangement of leaves differs depending on whether the photon flux density or the average photosynthetic rate is maximized. Furthermore, the leaf form was shown to affect the optimal arrangement of leaves and to have a significant influence on the photosynthetic rate.

  7. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequential unconstrained minimization technique (SUMT), sequential linear programming (SLP) and the sequential quadratic programming technique (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert the constrained problem into an unconstrained one by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with their own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
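
    As a minimal sketch of the penalty idea described above, the snippet below converts a toy constrained problem into an unconstrained one with a static quadratic exterior penalty and hands it to an evolutionary optimizer; scipy's differential_evolution is used only as a convenient population-based stand-in for the GA in COMETBOARDS, and the objective, constraint and penalty weight are illustrative assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Toy constrained problem (not from COMETBOARDS):
        #   minimize f(x) = x0^2 + x1^2   subject to   g(x) = 1 - x0 - x1 <= 0
        def objective(x):
            return x[0] ** 2 + x[1] ** 2

        def violation(x):
            return max(0.0, 1.0 - x[0] - x[1])   # positive only when infeasible

        def penalized(x, r=100.0):
            # Static quadratic exterior penalty: infeasible points pay r * violation^2.
            return objective(x) + r * violation(x) ** 2

        result = differential_evolution(penalized, bounds=[(-2.0, 2.0), (-2.0, 2.0)], seed=0)
        print(result.x, result.fun)   # approaches the constrained optimum near (0.5, 0.5)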

  8. Stochastic fluctuations and the detectability limit of network communities.

    PubMed

    Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo

    2013-12-01

    We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdös-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found by further modularity-optimization methods and for methods based on different heuristics implementations, we conclude that indeed a correct definition of the detectability limit must take into account the stochastic fluctuations of the network construction.
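
    For readers who want to reproduce the qualitative effect, the sketch below builds a small planted-partition benchmark and runs modularity-based detection with networkx; the graph sizes and mixing probabilities are illustrative assumptions, and the planted-partition generator is used as a simple stand-in for the Girvan and Newman benchmark discussed above.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Four planted groups of 32 nodes; pushing p_in down toward p_out drives the graph
        # toward a structureless Erdos-Renyi limit where the planted partition is lost.
        G = nx.planted_partition_graph(l=4, k=32, p_in=0.3, p_out=0.05, seed=1)

        # Modularity-based detection; different stochastic realizations of the same
        # parameters can yield different partitions, which is the fluctuation effect
        # emphasized in the abstract.
        communities = greedy_modularity_communities(G)
        print("detected communities:", len(communities))
        print("community sizes:", sorted(len(c) for c in communities))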

  9. The structural, electronic and spectroscopic properties of 4FPBAPE molecule: Experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Tanış, Emine; Babur Sas, Emine; Kurban, Mustafa; Kurt, Mustafa

    2018-02-01

    An experimental and theoretical study of the 4-Formyl Phenyl Boronic Acid Pinacol Ester (4FPBAPE) molecule was performed in this work. 1H and 13C NMR and UV-Vis spectra were recorded in dimethyl sulfoxide (DMSO). The structural and spectroscopic properties and energies of 4FPBAPE were obtained for two potential conformers from density functional theory (DFT) with the B3LYP/6-311G(d,p) and CAM-B3LYP/6-311G(d,p) basis sets. The optimal geometry of those structures was obtained according to the position of the oxygen atom upon determining the scan coordinates for each conformation. The most stable conformer was found to be the A2 form. The fundamental vibrations were determined based on the optimized structure in terms of total energy distribution. Electronic properties such as oscillator strength, wavelength, excitation energy, HOMO, LUMO and molecular electrostatic potential, and structural properties such as radial distribution functions (RDF) and probability density depending on coordination number, are presented. Theoretical results for the 4FPBAPE spectra were found to be compatible with the observed spectra.

  10. Vortex ring behavior provides the epigenetic blueprint for the human heart

    PubMed Central

    Arvidsson, Per M.; Kovács, Sándor J.; Töger, Johannes; Borgquist, Rasmus; Heiberg, Einar; Carlsson, Marcus; Arheden, Håkan

    2016-01-01

    The laws of fluid dynamics govern vortex ring formation and precede cardiac development by billions of years, suggesting that diastolic vortex ring formation is instrumental in defining the shape of the heart. Using novel and validated magnetic resonance imaging measurements, we show that the healthy left ventricle moves in tandem with the expanding vortex ring, indicating that cardiac form and function is epigenetically optimized to accommodate vortex ring formation for volume pumping. Healthy hearts demonstrate a strong coupling between vortex and cardiac volumes (R2 = 0.83), but this optimized phenotype is lost in heart failure, suggesting restoration of normal vortex ring dynamics as a new, and possibly important consideration for individualized heart failure treatment. Vortex ring volume was unrelated to early rapid filling (E-wave) velocity in patients and controls. Characteristics of vortex-wall interaction provide unique physiologic and mechanistic information about cardiac diastolic function that may be applied to guide the design and implantation of prosthetic valves, and have potential clinical utility as therapeutic targets for tailored medicine or measures of cardiac health. PMID:26915473

  11. Vortex ring behavior provides the epigenetic blueprint for the human heart.

    PubMed

    Arvidsson, Per M; Kovács, Sándor J; Töger, Johannes; Borgquist, Rasmus; Heiberg, Einar; Carlsson, Marcus; Arheden, Håkan

    2016-02-26

    The laws of fluid dynamics govern vortex ring formation and precede cardiac development by billions of years, suggesting that diastolic vortex ring formation is instrumental in defining the shape of the heart. Using novel and validated magnetic resonance imaging measurements, we show that the healthy left ventricle moves in tandem with the expanding vortex ring, indicating that cardiac form and function is epigenetically optimized to accommodate vortex ring formation for volume pumping. Healthy hearts demonstrate a strong coupling between vortex and cardiac volumes (R2 = 0.83), but this optimized phenotype is lost in heart failure, suggesting restoration of normal vortex ring dynamics as a new, and possibly important consideration for individualized heart failure treatment. Vortex ring volume was unrelated to early rapid filling (E-wave) velocity in patients and controls. Characteristics of vortex-wall interaction provide unique physiologic and mechanistic information about cardiac diastolic function that may be applied to guide the design and implantation of prosthetic valves, and have potential clinical utility as therapeutic targets for tailored medicine or measures of cardiac health.

  12. Designers' unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.

    1992-01-01

    The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  13. Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.; Park, Michael A.

    2017-01-01

    The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.

  14. The mechanisms of granulation of activated sludge in wastewater treatment, its optimization, and impact on effluent quality.

    PubMed

    Wilén, Britt-Marie; Liébana, Raquel; Persson, Frank; Modin, Oskar; Hermansson, Malte

    2018-06-01

    Granular activated sludge has gained increasing interest due to its potential in treating wastewater in a compact and efficient way. It is well-established that activated sludge can form granules under certain environmental conditions such as batch-wise operation with feast-famine feeding, high hydrodynamic shear forces, and short settling time which select for dense microbial aggregates. Aerobic granules with stable structure and functionality have been obtained with a range of different wastewaters seeded with different sources of sludge at different operational conditions, but the microbial communities developed differed substantially. In spite of this, granule instability occurs. In this review, the available literature on the mechanisms involved in granulation and how it affects the effluent quality is assessed with special attention given to the microbial interactions involved. To be able to optimize the process further, more knowledge is needed regarding the influence of microbial communities and their metabolism on granule stability and functionality. Studies performed at conditions similar to full-scale such as fluctuation in organic loading rate, hydrodynamic conditions, temperature, incoming particles, and feed water microorganisms need further investigations.

  15. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  16. Band gap characterization of ternary BBi1-xNx (0≤x≤1) alloys using modified Becke-Johnson (mBJ) potential

    NASA Astrophysics Data System (ADS)

    Yalcin, Battal G.

    2015-04-01

    The semi-local Becke-Johnson (BJ) exchange-correlation potential and its modified form proposed by Tran and Blaha have attracted a lot of interest recently because of the surprisingly accurate band gaps they can deliver for many semiconductors and insulators (e.g., sp semiconductors, noble-gas solids, and transition-metal oxides). The structural and electronic properties of the ternary alloys BBi1-xNx (0≤x≤1) in the zinc-blende phase are reported in this study. The results for the studied binary compounds (BN and BBi) and the ternary BBi1-xNx alloy structures are presented by means of density functional theory. The exchange and correlation effects are taken into account by using the generalized gradient approximation (GGA) functional of Wu and Cohen (WC), which is an improved form of the popular Perdew-Burke-Ernzerhof (PBE) functional. For the electronic properties the modified Becke-Johnson (mBJ) potential, which is more accurate than standard semi-local LDA and PBE calculations, has been chosen. Geometry optimization has been carried out before the volume optimization calculations for all the studied alloy structures. The obtained equilibrium lattice constants of the studied binary compounds are in agreement with experimental work, and the variation of the lattice parameter of the ternary BBi1-xNx alloys almost perfectly follows Vegard's law. The spin-orbit interaction (SOI) has also been considered for the structural and electronic calculations and the results are compared to those of non-SOI calculations.
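
    Since the abstract notes that the alloy lattice parameter follows Vegard's law, a one-line worked example may help; the endpoint lattice constants below are rough literature-style placeholders, not the WC-GGA values of this study.

        # Vegard's law: linear interpolation of the alloy lattice constant between the
        # binary endpoints, with x the nitrogen fraction in BBi(1-x)N(x).
        def vegard_lattice_constant(x, a_BBi=6.32, a_BN=3.62):
            return (1.0 - x) * a_BBi + x * a_BN

        for x in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(f"x = {x:.2f}:  a = {vegard_lattice_constant(x):.3f} angstrom")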

  17. Optimal Spatial Design of Capacity and Quantity of Rainwater Catchment Systems for Urban Flood Mitigation

    NASA Astrophysics Data System (ADS)

    Huang, C.; Hsu, N.

    2013-12-01

    This study imports Low-Impact Development (LID) technology of rainwater catchment systems into a Storm-Water runoff Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation. This study proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities, so that the reduced flooding circumstances under a variety of design forms can be simulated by SWMM. Moreover, we calculate the net benefit, which equals the decrease in inundation loss minus the facility cost, and the best solution of the simulation method becomes the initial solution of the optimization model. In the optimization method, we first apply the outcome of the simulation method and a Back-Propagation Neural Network (BPNN) to develop a water-level simulation model of the urban drainage system, in order to replace SWMM, whose operation is based on a graphical user interface and which is hard to combine with an optimization model and method. We then embed the BPNN-based simulation model into the developed optimization model, whose objective function minimizes the negative net benefit. Finally, we establish a tabu search-based algorithm to optimize the planning solution. This study applies the developed method in Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model within the optimization model not only finds solutions that are better than those of the simulation method by 12.75%, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% according to historical flood events.

  18. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
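
    A minimal sketch of the core idea, trend estimation posed as a convex program, is shown below using cvxpy: Gaussian noise makes the log-likelihood concave (a least-squares term after negation), and prior knowledge that the trend varies slowly enters as a convex curvature penalty (it could equally be posed as a hard constraint). The signal, noise level and weight are illustrative assumptions.

        import numpy as np
        import cvxpy as cp

        np.random.seed(0)
        t = np.linspace(0.0, 1.0, 200)
        true_trend = 0.5 * t + 0.2 * np.sin(4.0 * t)
        y = true_trend + 0.05 * np.random.randn(t.size)       # noisy measurements

        x = cp.Variable(t.size)                                # unknown trend
        data_fit = cp.sum_squares(y - x)                       # negative Gaussian log-likelihood (up to constants)
        smoothness = cp.sum_squares(cp.diff(x, 2))             # penalize curvature of the estimate

        cp.Problem(cp.Minimize(data_fit + 50.0 * smoothness)).solve()
        print("rms estimation error:", np.linalg.norm(x.value - true_trend) / np.sqrt(t.size))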

  19. Hamiltonian Systems and Optimal Control in Computational Anatomy: 100 Years Since D'Arcy Thompson.

    PubMed

    Miller, Michael I; Trouvé, Alain; Younes, Laurent

    2015-01-01

    The Computational Anatomy project is the morphome-scale study of shape and form, which we model as an orbit under diffeomorphic group action. Metric comparison calculates the geodesic length of the diffeomorphic flow connecting one form to another. Geodesic connection provides a positioning system for coordinatizing the forms and positioning their associated functional information. This article reviews progress since the Euler-Lagrange characterization of the geodesics a decade ago. Geodesic positioning is posed as a series of problems in Hamiltonian control, which emphasize the key reduction from the Eulerian momentum with dimension of the flow of the group, to the parametric coordinates appropriate to the dimension of the submanifolds being positioned. The Hamiltonian viewpoint provides important extensions of the core setting to new, object-informed positioning systems. Several submanifold mapping problems are discussed as they apply to metamorphosis, multiple shape spaces, and longitudinal time series studies of growth and atrophy via shape splines.

  20. Parameter setting for peak fitting method in XPS analysis of nitrogen in sewage sludge

    NASA Astrophysics Data System (ADS)

    Tang, Z. J.; Fang, P.; Huang, J. H.; Zhong, P. Y.

    2017-12-01

    Thermal decomposition is regarded as an important route for treating increasing amounts of sewage sludge, but the high nitrogen content causes serious nitrogen-related problems, so determining the existing forms and content of nitrogen in sewage sludge becomes essential. In this study, XPSpeak 4.1 was used to investigate the functional forms of nitrogen in sewage sludge; a peak fitting method was adopted and the best-optimized parameters were determined. According to the results, the N 1s spectral curve can be resolved into 5 peaks: pyridine-N (398.7±0.4 eV), pyrrole-N (400.5±0.3 eV), protein-N (400.4 eV), ammonium-N (401.1±0.3 eV) and nitrogen oxide-N (403.5±0.5 eV). Based on the experimental data obtained from elemental analysis and spectrophotometry, the optimum parameters of the curve fitting method were determined: background type Tougaard, FWHM 1.2, 50% Lorentzian-Gaussian. XPS can thus be used as a practical tool to analyze the nitrogen functional groups of sewage sludge and reflect the real content of nitrogen in its different forms.
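
    The fitted line shape implied by the reported parameters (50% Lorentzian-Gaussian, FWHM 1.2 eV) can be written down directly; the sketch below evaluates a pseudo-Voigt sum over the five reported N 1s components, with purely illustrative amplitudes and an assumed prior background subtraction.

        import numpy as np

        def pseudo_voigt(E, center, fwhm=1.2, eta=0.5):
            # 50/50 Lorentzian-Gaussian sum with a common FWHM, per the optimized parameters.
            sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            gauss = np.exp(-0.5 * ((E - center) / sigma) ** 2)
            lorentz = 1.0 / (1.0 + ((E - center) / (fwhm / 2.0)) ** 2)
            return eta * lorentz + (1.0 - eta) * gauss

        centers = {"pyridine-N": 398.7, "pyrrole-N": 400.5, "protein-N": 400.4,
                   "ammonium-N": 401.1, "nitrogen oxide-N": 403.5}
        amplitudes = {"pyridine-N": 1.0, "pyrrole-N": 0.8, "protein-N": 0.6,
                      "ammonium-N": 0.5, "nitrogen oxide-N": 0.2}   # illustrative only

        E = np.linspace(396.0, 406.0, 500)
        envelope = sum(a * pseudo_voigt(E, centers[k]) for k, a in amplitudes.items())
        print("composite N 1s envelope peaks near", round(float(E[np.argmax(envelope)]), 2), "eV")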

  1. Melatonin charge transfer complex with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone: Molecular structure, DFT studies, thermal analyses, evaluation of biological activity and utility for determination of melatonin in pure and dosage forms

    NASA Astrophysics Data System (ADS)

    Mohamed, Gehad G.; Hamed, Maher M.; Zaki, Nadia G.; Abdou, Mohamed M.; Mohamed, Marwa El-Badry; Abdallah, Abanoub Mosaad

    2017-07-01

    A simple, accurate and fast spectrophotometric method for the quantitative determination of melatonin (ML) drug in its pure and pharmaceutical forms was developed based on the formation of its charge transfer complex with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) as an electron acceptor. The different conditions for this method were optimized accurately. The Lambert-Beer law was found to be valid over the concentration range of 4-100 μg mL-1 ML. The solid form of the CT complex was structurally characterized by means of different spectral methods. Density functional theory (DFT) and time-dependent density functional theory (TD-DFT) calculations were carried out. The different quantum chemical parameters of the CT complex were calculated. Thermal properties of the CT complex and its kinetic thermodynamic parameters were studied, and its antimicrobial and antifungal activities were investigated. Molecular docking studies were performed to predict the binding modes of the CT complex components towards E. coli bacterial RNA and the receptor of breast cancer mutant oxidoreductase.

  2. Melatonin charge transfer complex with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone: Molecular structure, DFT studies, thermal analyses, evaluation of biological activity and utility for determination of melatonin in pure and dosage forms.

    PubMed

    Mohamed, Gehad G; Hamed, Maher M; Zaki, Nadia G; Abdou, Mohamed M; Mohamed, Marwa El-Badry; Abdallah, Abanoub Mosaad

    2017-07-05

    A simple, accurate and fast spectrophotometric method for the quantitative determination of melatonin (ML) drug in its pure and pharmaceutical forms was developed based on the formation of its charge transfer complex with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) as an electron acceptor. The different conditions for this method were optimized accurately. The Lambert-Beer law was found to be valid over the concentration range of 4-100 μg mL-1 ML. The solid form of the CT complex was structurally characterized by means of different spectral methods. Density functional theory (DFT) and time-dependent density functional theory (TD-DFT) calculations were carried out. The different quantum chemical parameters of the CT complex were calculated. Thermal properties of the CT complex and its kinetic thermodynamic parameters were studied, and its antimicrobial and antifungal activities were investigated. Molecular docking studies were performed to predict the binding modes of the CT complex components towards E. coli bacterial RNA and the receptor of breast cancer mutant oxidoreductase. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Evaluation of DICOM viewer software for workflow integration in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.

    2015-03-01

    The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, in contrast to patient care in the hospital, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to this missing integration, even simple visualization of patients' image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, raising both functionality and demands on interfaces with each level. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements, the survey involves the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.

  4. Nonseparable exchange–correlation functional for molecules, including homogeneous catalysis involving transition metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Haoyu S.; Zhang, Wenjing; Verma, Pragya

    2015-01-01

    The goal of this work is to develop a gradient approximation to the exchange–correlation functional of Kohn–Sham density functional theory for treating molecular problems with a special emphasis on the prediction of quantities important for homogeneous catalysis and other molecular energetics. Our training and validation of exchange–correlation functionals is organized in terms of databases and subdatabases. The key properties required for homogeneous catalysis are main group bond energies (database MGBE137), transition metal bond energies (database TMBE32), reaction barrier heights (database BH76), and molecular structures (database MS10). We also consider 26 other databases, most of which are subdatabases of a newly extended broad database called Database 2015, which is presented in the present article and in its ESI. Based on the mathematical form of a nonseparable gradient approximation (NGA), as first employed in the N12 functional, we design a new functional by using Database 2015 and by adding smoothness constraints to the optimization of the functional. The resulting functional is called the gradient approximation for molecules, or GAM. The GAM functional gives better results for MGBE137, TMBE32, and BH76 than any available generalized gradient approximation (GGA) or than N12. The GAM functional also gives reasonable results for MS10 with an MUE of 0.018 Å. The GAM functional provides good results both within the training sets and outside the training sets. The convergence tests and the smooth curves of exchange–correlation enhancement factor as a function of the reduced density gradient show that the GAM functional is a smooth functional that should not lead to extra expense or instability in optimizations. NGAs, like GGAs, have the advantage over meta-GGAs and hybrid GGAs of respectively smaller grid-size requirements for integrations and lower costs for extended systems. These computational advantages combined with the relatively high accuracy for all the key properties needed for molecular catalysis make the GAM functional very promising for future applications.

  5. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  6. Development of Probiotic Formulation for the Treatment of Iron Deficiency Anemia.

    PubMed

    Korčok, Davor Jovan; Tršić-Milanović, Nada Aleksandar; Ivanović, Nevena Djuro; Đorđević, Brižita Ivan

    2018-04-01

    Probiotics are increasingly present both as functional foods and in pharmaceutical preparations with multiple levels of action that contribute to human health. Probiotics realize their positive effects with a proper dose and by maintaining the declared number of probiotic cells until the expiration date. An important precondition for developing a probiotic product is the right choice of a clinically proven probiotic strain, the choice of other active components, as well as the optimization of the quantity of the active probiotic component per product dose. This paper describes the optimization of the number of probiotic cells in the formulation of a dietary supplement that contains the probiotic culture Lactobacillus plantarum 299v, iron and vitamin C. Variations of the quantity of active component were analyzed in development batches of the encapsulated probiotic product categorized as a dietary supplement with the following ingredients: probiotic culture, sucrosomal form of iron and vitamin C. An optimal quantity of the active component L. plantarum of 50 mg was selected. The purpose of this paper is to select the optimal formulation of probiotic culture in a dietary supplement that contains iron and vitamin C, and also to determine its expiration date by analysis of the number of viable probiotic cells.

  7. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex programming problem for which a closed-form solution is guaranteed. Moreover, we generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  8. Improved HPLC method with the aid of chemometric strategy: determination of loxoprofen in pharmaceutical formulation.

    PubMed

    Venkatesan, P; Janardhanan, V Sree; Muralidharan, C; Valliappan, K

    2012-06-01

    Loxoprofen is a nonsteroidal anti-inflammatory drug that acts by inhibiting the cyclo-oxygenase isoforms 1 and 2. In this study an improved RP-HPLC method was developed for the quantification of loxoprofen in a pharmaceutical dosage form. For that purpose an experimental design approach was employed. Independent variables (factors: organic modifier, pH of the mobile phase and flow rate) were extracted from the preliminary study, and three responses were selected as dependent variables (loxoprofen retention factor, resolution between loxoprofen and probenecid, and retention time of probenecid). To improve the method development and optimization step, Derringer's desirability function was applied to simultaneously optimize the three chosen responses. The procedure allowed deduction of optimal conditions; the predicted optimum was acetonitrile:water (53:47, v/v), with the pH of the mobile phase adjusted to 2.9 with orthophosphoric acid. The separation was achieved in less than 4 minutes. The method was applied in the quality control of commercial tablets. The method showed good agreement between the experimental data and predicted values throughout the studied parameter space. The optimized assay condition was validated according to International Conference on Harmonisation guidelines to confirm specificity, linearity, accuracy and precision.
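
    A tiny numeric sketch of the Derringer-type desirability idea used above: each response is mapped to a [0, 1] desirability and the overall desirability is their geometric mean, which the optimizer then maximizes over the factor space. The target ranges and response values here are illustrative assumptions, not the chromatographic specifications of the paper.

        import numpy as np

        def d_larger_is_better(y, low, high):
            # 0 below 'low', 1 above 'high', linear in between.
            return float(np.clip((y - low) / (high - low), 0.0, 1.0))

        def d_smaller_is_better(y, low, high):
            # 1 below 'low', 0 above 'high', linear in between.
            return float(np.clip((high - y) / (high - low), 0.0, 1.0))

        def overall_desirability(ds):
            return float(np.prod(ds) ** (1.0 / len(ds)))   # geometric mean

        # Example responses for one candidate separation condition:
        resolution = 2.4        # larger is better
        run_time_min = 3.6      # smaller is better
        retention_factor = 2.1  # treated as larger-is-better up to a limit here

        ds = [d_larger_is_better(resolution, 1.5, 3.0),
              d_smaller_is_better(run_time_min, 2.0, 5.0),
              d_larger_is_better(retention_factor, 1.0, 3.0)]
        print("overall desirability D =", round(overall_desirability(ds), 3))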

  9. A novel method for energy harvesting simulation based on scenario generation

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min

    2018-06-01

    The energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies this electrical energy as a primary or secondary power source to communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual situation because it lacks authenticity. We propose an EHN simulation method based on scenario generation in this paper. Firstly, instead of setting a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios based on the historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences in order to simulate more accurately the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an instance of optimizing the network throughput, which indicates the feasibility and effectiveness of the proposed method from the optimal solution and data analysis in energy harvesting simulation.
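
    As a rough sketch of the single-period step only, the snippet below reduces a synthetic history of harvested energy to a few representative scenarios with probabilities; k-means is a crude stand-in for the optimal scenario reduction used in the paper, and the data, units and scenario count are illustrative assumptions (the daily sequence generation via simulated annealing is not shown).

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        history = np.abs(rng.normal(5.0, 2.0, size=(1000, 1)))   # synthetic harvested energy per period

        k = 5
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(history)
        scenarios = km.cluster_centers_.ravel()                   # representative scenarios
        probabilities = np.bincount(km.labels_, minlength=k) / len(history)

        for s, p in sorted(zip(scenarios, probabilities)):
            print(f"scenario {s:5.2f} with probability {p:.3f}")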

  10. Non-additivity of functional group contributions in protein-ligand binding: a comprehensive study by crystallography and isothermal titration calorimetry.

    PubMed

    Baum, Bernhard; Muley, Laveena; Smolinski, Michael; Heine, Andreas; Hangauer, David; Klebe, Gerhard

    2010-04-09

    Additivity of functional group contributions to protein-ligand binding is a very popular concept in medicinal chemistry as the basis of rational design and optimized lead structures. Most of the currently applied scoring functions for docking build on such additivity models. Even though the limitation of this concept is well known, case studies examining in detail why additivity fails at the molecular level are still very scarce. The present study shows, by use of crystal structure analysis and isothermal titration calorimetry for a congeneric series of thrombin inhibitors, that extensive cooperative effects between hydrophobic contacts and hydrogen bond formation are intimately coupled via dynamic properties of the formed complexes. The formation of optimal lipophilic contacts with the surface of the thrombin S3 pocket and the full desolvation of this pocket can conflict with the formation of an optimal hydrogen bond between ligand and protein. The mutual contributions of the competing interactions depend on the size of the ligand hydrophobic substituent and influence the residual mobility of ligand portions at the binding site. Analysis of the individual crystal structures and factorizing the free energy into enthalpy and entropy demonstrates that binding affinity of the ligands results from a mixture of enthalpic contributions from hydrogen bonding and hydrophobic contacts, and entropic considerations involving an increasing loss of residual mobility of the bound ligands. This complex picture of mutually competing and partially compensating enthalpic and entropic effects determines the non-additivity of free energy contributions to ligand binding at the molecular level. (c) 2010 Elsevier Ltd. All rights reserved.

  11. Intrauterine exposure to tobacco and executive functioning in high school.

    PubMed

    Rose-Jacobs, Ruth; Richardson, Mark A; Buchanan-Howland, Kathryn; Chen, Clara A; Cabral, Howard; Heeren, Timothy C; Liebschutz, Jane; Forman, Leah; Frank, Deborah A

    2017-07-01

    Executive functioning (EF), an umbrella construct encompassing gradual maturation of cognitive organization/management processes, is important to success in multiple settings including high school. Intrauterine tobacco exposure (IUTE) correlates with negative cognitive/behavioral outcomes, but little is known about its association with adolescent EF and information from real-life contexts is sparse. We evaluated the impact of IUTE on teacher-reported observations of EF in urban high school students controlling for covariates including other intrauterine and adolescent substance exposures. A prospective low-income birth cohort (51% male; 89% African American/Caribbean) was followed through late adolescence (16-18 years old). At birth, intrauterine exposures to cocaine and other substances (52% cocaine, 52% tobacco, 26% marijuana, 26% alcohol) were identified by meconium and/or urine assays, and/or maternal self-report. High school teachers knowledgeable about the student and unaware of study aims were asked to complete the Behavior Rating Inventory of Executive Functioning-Teacher Form (BRIEF-TF) annually. Teachers completed at least one BRIEF-TF for 131 adolescents. Multivariable analyses included controls for: demographics; intrauterine cocaine, marijuana, and alcohol exposures; early childhood exposures to lead; and violence exposure from school-age to adolescence. IUTE was associated with less optimal BRIEF-TF Behavioral Regulation scores (p <0.05). Other intrauterine substance exposures did not predict less optimal BRIEF-TF scores, nor did exposures to violence, lead, nor adolescents' own substance use. IUTE is associated with offspring's less optimal EF. Prenatal counseling should emphasize abstinence from tobacco, as well as alcohol and illegal substances. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Extension of the KLI approximation toward the exact optimized effective potential.

    PubMed

    Iafrate, G J; Krieger, J B

    2013-03-07

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first order perturbation theory wavefunction by use of Dalgarno functions which are determined from well known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  13. Extension of the KLI approximation toward the exact optimized effective potential

    NASA Astrophysics Data System (ADS)

    Iafrate, G. J.; Krieger, J. B.

    2013-03-01

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first order perturbation theory wavefunction by use of Dalgarno functions which are determined from well known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  14. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ2, obtained from the Taylor series expansion of χ2, is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
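
    The separable structure described above (linear amplitudes, nonlinear decay times) can be sketched in a few lines: the evolutionary search handles only the nonlinear parameters, while at every candidate the linear parameters are recovered by ordinary least squares, in the spirit of the embedded MLR step. scipy's differential_evolution stands in for the GA here, and the synthetic biexponential decay and bounds are illustrative assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 200)
        y = 2.0 * np.exp(-t / 0.8) + 0.7 * np.exp(-t / 4.0) + 0.02 * rng.normal(size=t.size)

        def residual_ss(taus):
            basis = np.column_stack([np.exp(-t / tau) for tau in taus])   # nonlinear basis functions
            amps = np.linalg.lstsq(basis, y, rcond=None)[0]               # linear (MLR-like) step
            return float(np.sum((basis @ amps - y) ** 2))

        result = differential_evolution(residual_ss, bounds=[(0.1, 2.0), (2.0, 10.0)], seed=0)
        best_basis = np.column_stack([np.exp(-t / tau) for tau in result.x])
        best_amps = np.linalg.lstsq(best_basis, y, rcond=None)[0]
        print("decay times:", result.x, " amplitudes:", best_amps)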

  15. Individual Functional ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

    PubMed Central

    Li, Kaiming; Guo, Lei; Zhu, Dajiang; Hu, Xintao; Han, Junwei; Liu, Tianming

    2013-01-01

    Studying connectivities among functional brain regions and the functional dynamics on brain networks has drawn increasing interest. A fundamental issue that affects functional connectivity and dynamics studies is how to determine the best possible functional brain regions or ROIs (regions of interest) for a group of individuals, since the connectivity measurements are heavily dependent on ROI locations. Essentially, identification of accurate, reliable and consistent corresponding ROIs is challenging due to the unclear boundaries between brain regions, variability across individuals, and nonlinearity of the ROIs. In response to these challenges, this paper presents a novel methodology to computationally optimize ROI locations derived from task-based fMRI data for individuals so that the optimized ROIs are more consistent, reproducible and predictable across brains. Our computational strategy is to formulate the individual ROI location optimization as a group variance minimization problem, in which group-wise consistencies in functional/structural connectivity patterns and anatomic profiles are defined as optimization constraints. Our experimental results from multimodal fMRI and DTI data show that the optimized ROIs have significantly improved consistency in structural and functional profiles across individuals. These improved functional ROIs with better consistency could contribute to further study of functional interaction and dynamics in the human brain. PMID:22281931

  16. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry.

    PubMed

    Beyer, Hans-Georg

    2014-01-01

    The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered.

  17. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the optimal sample size of single-arm and two-arm clinical trials in the general case of a primary endpoint whose distribution belongs to a one-parameter exponential family, where the sample size optimizes a utility function that quantifies the cost and the gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N∗, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N∗^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximation can be reasonable even for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
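
    To give a feel for where the square-root scaling comes from, under assumptions much cruder than the paper's exponential-family utility framework, the sketch below uses a toy two-arm Bernoulli setting: n patients are split evenly between arms, the remaining N - n patients receive the empirically better arm, and the utility is the expected number of successes in the whole population. The response rates, the utility, and the search grid are all illustrative assumptions.

    ```python
    # Hedged sketch: a toy decision-theoretic sample-size calculation illustrating
    # the O(sqrt(N)) scaling of the optimal trial size with population size N.
    import numpy as np

    rng = np.random.default_rng(3)
    p1, p2 = 0.5, 0.6            # assumed true response rates (arm 2 is better)

    def expected_successes(n, N, n_sim=4000):
        """Monte Carlo estimate of expected successes among all N patients when
        n patients are randomized equally between the two arms and the rest
        receive the empirically better arm."""
        m = n // 2
        s1 = rng.binomial(m, p1, size=n_sim)
        s2 = rng.binomial(m, p2, size=n_sim)
        pick = np.where(s2 >= s1, p2, p1)             # rate given to the remaining patients
        return m * (p1 + p2) + (N - n) * pick.mean()

    for N in (1_000, 10_000, 100_000):
        sizes = np.arange(10, min(N, 2000), 10)
        best_n = sizes[np.argmax([expected_successes(n, N) for n in sizes])]
        print(f"N={N:>7}: optimal n ~ {best_n}, n/sqrt(N) ~ {best_n/np.sqrt(N):.2f}")
    ```

    Up to Monte Carlo noise, the printed ratio n/sqrt(N) stays roughly constant as N grows, which is the O(N^1/2) behavior the abstract describes.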

  18. Regulatory Phosphorylation of Ikaros by Bruton's Tyrosine Kinase

    PubMed Central

    Zhang, Jian; Ishkhanian, Rita; Uckun, Fatih M.

    2013-01-01

    Diminished Ikaros function has been implicated in the pathogenesis of acute lymphoblastic leukemia (ALL), the most common form of childhood cancer. Therefore, a stringent regulation of Ikaros is of paramount importance for normal lymphocyte ontogeny. Here we provide genetic and biochemical evidence for a previously unknown function of Bruton's tyrosine kinase (BTK) as a partner and posttranslational regulator of Ikaros, a zinc finger-containing DNA-binding protein that plays a pivotal role in immune homeostasis. We demonstrate that BTK phosphorylates Ikaros at unique phosphorylation sites S214 and S215 in the close vicinity of its zinc finger 4 (ZF4) within the DNA binding domain, thereby augmenting its nuclear localization and sequence-specific DNA binding activity. Our results further demonstrate that BTK-induced activating phosphorylation is critical for the optimal transcription factor function of Ikaros. PMID:23977012

  19. Constructing graph models for software system development and analysis

    NASA Astrophysics Data System (ADS)

    Pogrebnoy, Andrey V.

    2017-01-01

    We propose a concept for building tooling that supports the rationale behind functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models: a functional model (FM) and a structural model (SM). The FM is the source code of the SS. The SM is a graph model (GM) generated automatically from the FM as its adequate representation. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for adequately representing the SS source code. We propose three levels of GM detail: GM1 for visual analysis of the source code and SS version control, GM2 for resource optimization and analysis of connections between SS components, and GM3 for analysis of the dynamic behavior of the SS. The paper includes examples of constructing all three levels of the GM.
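
    As a small illustration of deriving a structural graph automatically from source code (not the paper's GM construction), the sketch below builds a module-level import-dependency graph from Python files; the file layout and node naming are assumptions, and the paper's GM1/GM2/GM3 levels are far richer than this.

    ```python
    # Hedged sketch: deriving a simple graph model (SM) from source code (FM).
    # The "graph model" here is just a module-level import-dependency graph
    # generated automatically from Python files.
    import ast
    from collections import defaultdict
    from pathlib import Path

    def build_import_graph(root: str) -> dict[str, set[str]]:
        """Map each module (by file stem) to the set of modules it imports."""
        graph: dict[str, set[str]] = defaultdict(set)
        for path in Path(root).rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"))
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    graph[path.stem].update(alias.name for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    graph[path.stem].add(node.module)
        return graph

    if __name__ == "__main__":
        for module, deps in sorted(build_import_graph(".").items()):
            print(module, "->", ", ".join(sorted(deps)))
    ```

    A richer GM would add call edges, resource annotations, and run-time traces, corresponding to the GM2 and GM3 levels described above.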

  20. Optimization of response surface and neural network models in conjugation with desirability function for estimation of nutritional needs of methionine, lysine, and threonine in broiler chickens.

    PubMed

    Mehri, Mehran

    2014-07-01

    The optimization algorithm of a model may have significant effects on the final optimal values of nutrient requirements in poultry enterprises. In poultry nutrition, the optimal values of dietary essential nutrients are very important for feed formulation to maximize profit by minimizing feed cost and maximizing bird performance. This study was conducted to introduce a novel multi-objective algorithm, the desirability function, for optimization of bird response models based on response surface methodology (RSM) and an artificial neural network (ANN). Growth databases based on a central composite design (CCD) were used to construct the RSM and ANN models, and the optimal values of 3 essential amino acids (lysine, methionine, and threonine) in broiler chicks were reevaluated using the desirability function in both analytical approaches from 3 to 16 d of age. Multi-objective optimization showed that the highest desirability was obtained for the ANN-based model (D = 0.99), where the optimal levels of digestible lysine (dLys), digestible methionine (dMet), and digestible threonine (dThr) for maximum desirability were 13.2, 5.0, and 8.3 g/kg of diet, respectively. In the RSM-based model, the optimal levels of dLys, dMet, and dThr were estimated at 11.2, 5.4, and 7.6 g/kg of diet, respectively. This research documented that applying an ANN to the broiler chicken model, along with a multi-objective optimization algorithm such as the desirability function, could be a useful tool for optimizing dietary amino acids in fractional factorial experiments, in which the global desirability function may be able to overcome the underestimation of dietary amino acids produced by the RSM model. © 2014 Poultry Science Association Inc.
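
    The desirability-function step can be sketched independently of how the response models are fitted. In the sketch below, two toy quadratic "response surfaces" for weight gain (to maximize) and feed conversion ratio (to minimize) stand in for the fitted RSM or ANN models; the coefficients, desirability bounds, and amino acid ranges are made-up assumptions used only to show how individual desirabilities combine into an overall score that is then maximized.

    ```python
    # Hedged sketch: combining responses via a Derringer-type desirability function
    # and maximizing it over dietary amino acid levels. The response surfaces and
    # all numeric values are illustrative assumptions, not the study's fitted models.
    import numpy as np
    from scipy.optimize import differential_evolution

    bounds = [(9.0, 14.0), (4.0, 6.0), (6.0, 9.0)]   # dLys, dMet, dThr (g/kg), assumed ranges

    def weight_gain(x):                   # larger is better (toy surface)
        lys, met, thr = x
        return 600 - 4*(lys - 12.5)**2 - 10*(met - 5.1)**2 - 6*(thr - 8.0)**2

    def fcr(x):                           # smaller is better (toy surface)
        lys, met, thr = x
        return 1.45 + 0.01*(lys - 12.8)**2 + 0.03*(met - 5.0)**2 + 0.02*(thr - 8.1)**2

    def desirability_max(y, lo, hi):      # one-sided "larger-is-better" desirability
        return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

    def desirability_min(y, lo, hi):      # one-sided "smaller-is-better" desirability
        return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

    def overall_desirability(x):
        d1 = desirability_max(weight_gain(x), 520.0, 600.0)
        d2 = desirability_min(fcr(x), 1.45, 1.60)
        return (d1 * d2) ** 0.5           # geometric mean of individual desirabilities

    res = differential_evolution(lambda x: -overall_desirability(x), bounds, seed=4)
    print("optimal (dLys, dMet, dThr) g/kg:", np.round(res.x, 2), "D =", round(-res.fun, 3))
    ```

    The geometric mean makes the overall desirability collapse to zero whenever any single response is unacceptable, which is what makes this construction convenient for multi-objective nutrient optimization of the kind described above.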
