Mauda, R.; Pinchas, M.
2014-01-01
Recently, a new blind equalization method was proposed for the 16QAM constellation input, inspired by the maximum entropy density approximation technique, with improved equalization performance compared with the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find the Lagrange multipliers that bring the approximated MSE to a minimum. Since differentiating the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part of the MSE expression that caused the nonlinearity in that equation was ignored. Thus, the obtained Lagrange multipliers were not those that actually bring the approximated MSE to a minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that, in the high signal-to-noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing high initial intersymbol interference (ISI), while the same equalization performance is obtained for an easy channel (low initial ISI). PMID:24723813
Kanarska, Yuliya; Walton, Otis
2015-11-30
Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved throughout the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domains such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle–particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
NASA Technical Reports Server (NTRS)
Watts, G.
1992-01-01
A programming technique to eliminate computational instability in multibody simulations that use the Lagrange multiplier is presented. The computational instability occurs when the attached bodies drift apart and violate the constraints. The programming technique uses the constraint equation, instead of integration, to determine the coordinates that are not independent. Although the equations of motion are unchanged, a complete derivation of the incorporation of the Lagrange multiplier into the equation of motion for two bodies is presented. A listing of a digital computer program which uses the programming technique to eliminate computational instability is also presented. The computer program simulates a solid rocket booster and parachute connected by a frictionless swivel.
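The drift-elimination idea described above can be illustrated with a toy sketch (not the report's booster-parachute code): for a point mass constrained to a circle of radius L, only the independent coordinate is integrated, and the dependent coordinate is recovered from the constraint equation at every step, so the bodies can never drift apart. All names and numbers below are illustrative.

```python
import numpy as np

# Toy illustration: constraint x^2 + y^2 = L^2 couples the coordinates.
# Naively integrating both x and y lets the state drift off the circle;
# instead, integrate only x (and vx) and compute y from the constraint.
L = 1.0

def step(x, vx, ax, dt):
    """Advance the independent coordinate, then enforce the constraint."""
    vx += ax * dt
    x += vx * dt
    x = np.clip(x, -L, L)          # keep the square root real
    y = -np.sqrt(L**2 - x**2)      # dependent coordinate from the constraint
    return x, vx, y

x, vx = 0.8, 0.0
for _ in range(1000):
    x, vx, y = step(x, vx, ax=-0.5 * x, dt=1e-3)  # toy restoring force
    # the constraint is satisfied to machine precision at every step
    assert abs(x**2 + y**2 - L**2) < 1e-12
```

The same idea scales to the two-body swivel joint in the report: the joint constraint determines one body's attachment-point coordinates from the other's, rather than integrating both and letting them separate.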
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xing; Lin, Guang; Zou, Jianfeng
To model red blood cell (RBC) deformation in flow, the recently developed LBM-DLM/FD method (Shi and Lim, 2007), derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method, is extended to employ a mesoscopic network model for simulations of red blood cell deformation. The flow is simulated by the lattice Boltzmann method with an external force, while the network model is used for modeling red blood cell deformation, and the fluid-RBC interaction is enforced by the Lagrange multiplier. Stretching numerical tests on both coarse and fine meshes are performed and compared with the corresponding experimental data to validate the parameters of the RBC network model. In addition, RBC deformation in pipe flow and in shear flow is simulated, revealing the capacity of the current method for modeling RBC deformation in various flows.
1982-04-01
The Relation Among the Likelihood Ratio, Wald, and Lagrange Multiplier Tests and Their Applicability to Small Samples (unclassified RAND report)
Distributed-Lagrange-Multiplier-based computational method for particulate flow with collisions
NASA Astrophysics Data System (ADS)
Ardekani, Arezoo; Rangel, Roger
2006-11-01
A Distributed-Lagrange-Multiplier-based computational method is developed for colliding particles in a solid-fluid system. A numerical simulation is conducted in two dimensions using the finite volume method. The entire domain is treated as a fluid, but the fluid in the particle domains satisfies a rigidity constraint. We present an efficient method for predicting the collision between particles. In earlier methods, a repulsive force was applied to the particles when their distance was less than a critical value. In this method, an impulsive force is computed instead. During the frictionless collision process between two particles, linear momentum is conserved while the tangential forces are zero. Thus, instead of satisfying a condition of rigid-body motion for each particle separately, as is done when particles are not in contact, both particles are rigidified together along their line of centers. Particles separate from each other when the impulsive force becomes negative, and after this time a rigidity constraint is satisfied for each particle separately. A grid-independence study is performed to ensure the accuracy of the numerical simulation. A comparison between this method and previous collision strategies is presented and discussed.
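The frictionless-impulse idea can be sketched in a few lines (a generic rigid-body contact sketch, not the paper's DLM solver): the normal component of the relative velocity along the line of centers is removed by an equal-and-opposite impulse, so linear momentum is conserved and tangential velocities are untouched.

```python
import numpy as np

def contact_impulse(x1, x2, v1, v2, m1, m2):
    """Rigidify two particles along their line of centers (frictionless)."""
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # line of centers
    v_rel_n = np.dot(v1 - v2, n)              # approach speed along n
    if v_rel_n <= 0.0:
        return v1, v2                          # separating: no impulse
    # Impulse magnitude that equalizes the normal velocities:
    j = v_rel_n / (1.0 / m1 + 1.0 / m2)
    return v1 - (j / m1) * n, v2 + (j / m2) * n

v1, v2 = contact_impulse(np.array([0., 0.]), np.array([1., 0.]),
                         np.array([1., 0.5]), np.array([0., 0.]),
                         m1=1.0, m2=1.0)
# Normal components are now equal (0.5 each); the tangential component of
# particle 1 (0.5) is untouched; total momentum m1*v1 + m2*v2 is unchanged.
```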
Statistical analogues of thermodynamic extremum principles
NASA Astrophysics Data System (ADS)
Ramshaw, John D.
2018-05-01
As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = -k_B Σ_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E - TS and the grand potential J = F - μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
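The equivalence can be checked numerically in a minimal sketch (toy energy levels, units with k_B = 1; all numbers are made up): minimising the statistical analogue F = E - TS by plain gradient descent, with normalisation handled by a softmax parameterisation so the problem is effectively unconstrained, recovers the canonical distribution p_i ∝ exp(-E_i/T).

```python
import numpy as np

E = np.array([0.0, 1.0, 2.5])    # toy energy levels
T = 0.7                          # temperature (k_B = 1)

def prob(theta):
    w = np.exp(theta - theta.max())
    return w / w.sum()           # softmax keeps p normalised automatically

def gradF(theta):
    """Gradient of F(p(theta)) = sum(p*E) - T*S with respect to theta."""
    p = prob(theta)
    a = E + T * np.log(p)        # dF/dp_i up to an additive constant
    return p * (a - p @ a)

theta = np.zeros_like(E)
for _ in range(5000):            # plain, unconstrained gradient descent
    theta -= 1.0 * gradF(theta)

p_min = prob(theta)
p_boltz = np.exp(-E / T); p_boltz /= p_boltz.sum()
assert np.allclose(p_min, p_boltz, atol=1e-6)   # canonical distribution
```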
Lagrange multiplier for perishable inventory model considering warehouse capacity planning
NASA Astrophysics Data System (ADS)
Amran, Tiena Gustina; Fatima, Zenny
2017-06-01
This paper presented a Lagrange multiplier approach for solving perishable raw material inventory planning considering warehouse capacity. A food company faced the issue of managing perishable raw materials and marinades which have limited shelf life. Another constraint to be considered was the capacity of the warehouse. Therefore, an inventory model considering shelf life and raw material warehouse capacity is needed in order to minimize the company's inventory cost. The inventory model implemented in this study was an adapted economic order quantity (EOQ) model optimized using a Lagrange multiplier. The model and solution approach were applied to a case in a food manufacturer. The result showed that the total inventory cost decreased by 2.42% after applying the proposed approach.
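The textbook version of this construction can be sketched as follows (a generic multi-item EOQ with a shared space constraint, not the paper's exact shelf-life model; all demand, cost, and capacity numbers are made up): the multiplier λ penalises warehouse space in each item's EOQ formula, and λ is adjusted until the capacity constraint is met.

```python
import numpy as np

D = np.array([1200., 800.])    # annual demand per item (made-up)
K = np.array([50., 40.])       # ordering cost per order
h = np.array([2., 3.])         # holding cost per unit per year
f = np.array([1.0, 1.5])       # storage space per unit
W = 250.0                      # warehouse capacity

def Q(lam):
    """Order quantities for a given Lagrange multiplier lam."""
    return np.sqrt(2 * D * K / (h + 2 * lam * f))

# If the unconstrained EOQ fits, lambda = 0; otherwise bisect on lambda
# until the space used, sum(f * Q), equals the capacity W.
if np.sum(f * Q(0.0)) <= W:
    lam = 0.0
else:
    lo, hi = 0.0, 1e3
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if np.sum(f * Q(lam)) > W else (lo, lam)

Q_opt = Q(lam)
assert np.sum(f * Q_opt) <= W + 1e-6   # capacity respected
```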
Size effects in non-linear heat conduction with flux-limited behaviors
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2017-11-01
Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, heat flux will not exist in problems at sufficiently small scales. The existence of heat flux requires the size of the heat conduction problem to be larger than a corresponding critical size, which is determined by the physical properties and boundary temperatures. The critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of second-law violation and solution multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including Lagrange multiplier, and to give a simpler formulation of Adomian decomposition and modified Adomian decomposition method in terms of newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and Adomian decomposition method, we find unnecessary calculations for Lagrange multiplier and also repeated calculations involved in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
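A minimal variational-iteration sketch makes the role of the Lagrange multiplier concrete (this is the classical correction functional, not the paper's VIM-II reformulation): for u' + u = 0, u(0) = 1, the identified multiplier is λ(s) = -1, and the iteration u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) + u_n(s)) ds reproduces the Taylor series of exp(-t) term by term.

```python
import sympy as sp

t, s = sp.symbols('t s')

u = sp.Integer(1)                # u_0 = u(0) = 1
for _ in range(4):
    un = u.subs(t, s)            # u_n evaluated at the dummy variable s
    # correction functional with Lagrange multiplier lambda(s) = -1:
    u = u - sp.integrate(sp.diff(un, s) + un, (s, 0, t))

# After 4 iterations, u is the degree-4 Taylor polynomial of exp(-t)
expected = 1 - t + t**2/2 - t**3/6 + t**4/24
assert sp.expand(u - expected) == 0
```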
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
NASA Technical Reports Server (NTRS)
Tanner, John A.
1996-01-01
A computational procedure is presented for the solution of frictional contact problems for aircraft tires. A Space Shuttle nose-gear tire is modeled using a two-dimensional laminated anisotropic shell theory which includes the effects of variations in material and geometric parameters, transverse-shear deformation, and geometric nonlinearities. Contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with both contact and friction conditions. The contact-friction algorithm is based on a modified Coulomb friction law. A modified two-field, mixed-variational principle is used to obtain elemental arrays. This modification consists of augmenting the functional of that principle by two terms: the Lagrange multiplier vector associated with normal and tangential node contact-load intensities and a regularization term that is quadratic in the Lagrange multiplier vector. These capabilities and computational features are incorporated into an in-house computer code. Experimental measurements were taken to define the response of the Space Shuttle nose-gear tire to inflation-pressure loads and to inflation-pressure loads combined with normal static loads against a rigid flat plate. These experimental results describe the meridional growth of the tire cross section caused by inflation loading, the static load-deflection characteristics of the tire, the geometry of the tire footprint under static loading conditions, and the normal and tangential load-intensity distributions in the tire footprint for the various static vertical loading conditions. Numerical results were obtained for the Space Shuttle nose-gear tire subjected to inflation pressure loads and combined inflation pressure and contact loads against a rigid flat plate. The experimental measurements and the numerical results are compared.
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
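The entropy-barrier/multiplier structure here is closely related to Sinkhorn scaling as used by the softassign algorithm: with an entropic barrier, updating the Lagrange multipliers of the row and column (assignment) constraints reduces to alternately normalising rows and columns. A generic sketch under that interpretation (toy cost matrix, illustrative barrier parameter; this is not the paper's full TSP algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((5, 5))                  # toy assignment-cost matrix
beta = 5.0                              # inverse barrier parameter

M = np.exp(-beta * C)                   # barrier-smoothed assignment weights
for _ in range(1000):                   # multiplier updates = Sinkhorn scaling
    M /= M.sum(axis=1, keepdims=True)   # enforce row sums = 1
    M /= M.sum(axis=0, keepdims=True)   # enforce column sums = 1

# M is now (nearly) doubly stochastic; as beta grows, M approaches a hard
# permutation, i.e. a feasible assignment.
assert np.allclose(M.sum(axis=1), 1.0, atol=1e-6)
```

Descending the barrier parameter (increasing beta) between rounds of scaling is what drives the iterate toward a high-quality discrete solution.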
Finite element approximation of an optimal control problem for the von Karman equations
NASA Technical Reports Server (NTRS)
Hou, L. Steven; Turner, James C.
1994-01-01
This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using the Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
Modified Interior Distance Functions (Theory and Methods)
NASA Technical Reports Server (NTRS)
Polyak, Roman A.
1995-01-01
In this paper we introduce and develop the theory of Modified Interior Distance Functions (MIDF's). The MIDF is a Classical Lagrangian (CL) for a constrained optimization problem which is equivalent to the initial one and can be obtained from the latter by a monotone transformation of both the objective function and the constraints. In contrast to the Interior Distance Functions (IDF's), which played a fundamental role in Interior Point Methods (IPM's), the MIDF's are defined on an extended feasible set and, along with the center, have two extra tools which control the computational process: the barrier parameter and the vector of Lagrange multipliers. The extra tools allow the MIDF's to acquire very important properties of Augmented Lagrangeans; one can consider the MIDF's as Interior Augmented Lagrangeans. This makes MIDF's similar in spirit to Modified Barrier Functions (MBF's), although there is a fundamental difference between them, both in theory and in methods. Based on MIDF theory, Modified Center Methods (MCM's) have been developed and analyzed. The MCM's find an unconstrained minimizer in primal space and update the Lagrange multipliers, while both the center and the barrier parameter can be fixed or updated at each step. The MCM's convergence was investigated, and their rate of convergence was estimated. The extension of the feasible set and the special role of the Lagrange multipliers allow the development of MCM's which produce, in the case of nondegenerate constrained optimization, primal and dual sequences that converge to the primal-dual solution with linear rate, even when both the center and the barrier parameter are fixed. Moreover, every Lagrange multiplier update shrinks the distance to the primal-dual solution by a factor 0 < gamma < 1, which can be made as small as one wants by choosing a fixed interior point as a 'center' and a fixed but large enough barrier parameter. The numerical realization of the MCM leads to the Newton MCM (NMCM).
The approximation to the primal minimizer is found by Newton's method, followed by the Lagrange multiplier update. Due to the MCM convergence when both the center and the barrier parameter are fixed, the condition number of the MIDF Hessian and the neighborhood of the primal minimizer where Newton's method is 'well' defined remain stable. This contributes to both the complexity and the numerical stability of the NMCM.
A Person Fit Test for IRT Models for Polytomous Items
ERIC Educational Resources Information Center
Glas, C. A. W.; Dagohoy, Anna Villa T.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…
NASA Technical Reports Server (NTRS)
Tielking, John T.
1989-01-01
Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
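The solid-body algorithm's use of the multiplier as a contact load can be shown with a one-degree-of-freedom sketch (an elastic body pushed toward a rigid obstacle; made-up stiffness, load, and gap, not the cylinder model of the presentation): appending the contact constraint u ≤ g to the potential energy via a Lagrange multiplier and applying minimum potential energy makes the multiplier exactly the contact reaction.

```python
# Minimise the potential energy Pi(u) = 0.5*k*u**2 - F*u subject to u <= g.
# Stationarity of the Lagrangian gives k*u - F + lam = 0, with lam >= 0
# and lam*(u - g) = 0 (the multiplier is the contact load resultant).
k, F, g = 100.0, 500.0, 2.0     # stiffness, applied load, gap (made-up)

u_free = F / k                  # unconstrained minimiser
if u_free <= g:
    u, lam = u_free, 0.0        # no contact: multiplier vanishes
else:
    u = g                       # constraint active: body sits on the obstacle
    lam = F - k * g             # stationarity gives the contact force

# Here u_free = 5 > g, so u = 2 and lam = 300: the contact load resultant
# for this deflection, exactly as the multiplier is identified in the text.
assert lam >= 0.0 and u <= g
```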
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xing; Lin, Guang
To model the sedimentation of a red blood cell (RBC) in a square duct and a circular pipe, the recently developed technique derived from the lattice Boltzmann method and the distributed Lagrange multiplier/fictitious domain method (LBM-DLM/FD) is extended to employ the mesoscopic network model for simulations of the sedimentation of the RBC in flow. The flow is simulated by the lattice Boltzmann method with a strong magnetic body force, while the network model is used for modeling RBC deformation. The fluid-RBC interactions are enforced by the Lagrange multiplier. The sedimentation of the RBC in a square duct and a circular pipe is simulated, revealing the capacity of the current method for modeling the sedimentation of the RBC in various flows. Numerical results illustrate that the terminal settling velocity increases with the increment of the exerted body force. The deformation of the RBC has a significant effect on the terminal settling velocity due to the change of the frontal area: the larger the exerted force, the smaller the frontal area and the larger the deformation of the RBC.
NASA Astrophysics Data System (ADS)
Amengonu, Yawo H.; Kakad, Yogendra P.
2014-07-01
Quasivelocity techniques such as Maggi's and the Boltzmann-Hamel equations eliminate Lagrange multipliers from the outset, as opposed to the Euler-Lagrange method, where one has to solve for the n configuration variables and the multipliers as functions of time when there are m nonholonomic constraints. Maggi's equation produces n second-order differential equations: (n-m) of them are derived using the (n-m) independent quasivelocities, and the time derivatives of the m kinematic constraints supply the remaining m second-order differential equations. This technique is applied to derive the dynamics of a differential mobile robot, and a controller which takes these dynamics into account is developed.
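The multiplier-elimination idea can be seen already at the kinematic level for a differential-drive robot (a standard unicycle sketch, not the paper's full dynamic model): instead of carrying the no-slip constraint ẋ sinθ - ẏ cosθ = 0 with a multiplier, the motion is parameterised by the independent quasivelocities v (forward speed) and w (turn rate), so the constraint is satisfied identically.

```python
import numpy as np

def simulate(v, w, dt=1e-3, steps=2000):
    """Integrate unicycle kinematics driven by quasivelocities v and w."""
    x = y = th = 0.0
    for _ in range(steps):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        # the nonholonomic constraint holds by construction:
        xdot, ydot = v * np.cos(th), v * np.sin(th)
        assert abs(xdot * np.sin(th) - ydot * np.cos(th)) < 1e-12
    return x, y, th

x, y, th = simulate(v=1.0, w=0.5)   # traces an arc of radius v/w = 2
```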
Use of the Digamma Function in Statistical Astrophysics Distributions
NASA Astrophysics Data System (ADS)
Cahill, Michael
2017-06-01
Relaxed astrophysical statistical distributions may be constructed by using the inverse of a most probable energy distribution equation giving the energy e_i of each particle in cell i in terms of the cell's particle population N_i. The digamma-mediated equation is A + B e_i = Ψ(1 + N_i), where the constants A and B are Lagrange multipliers and Ψ is the digamma function, given by Ψ(1 + x) = d ln(x!)/dx. Results are discussed for a Monatomic Ideal Gas, Atmospheres of Spherical Planets or Satellites, and Spherical Globular Clusters. These distributions are self-terminating even if other factors do not cause a cutoff. The examples are discussed classically, but relativistic extensions are possible.
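Inverting the relation A + B e_i = Ψ(1 + N_i) for the population N_i can be sketched with Newton's method (A, B, and the energies below are illustrative numbers, not values from the paper):

```python
from scipy.special import digamma, polygamma

def invert_digamma(y, n0=1.0, iters=50):
    """Solve digamma(1 + n) = y for n >= 0 by Newton's method."""
    n = n0
    for _ in range(iters):
        # polygamma(1, .) is the trigamma function, the derivative needed
        n -= (digamma(1 + n) - y) / polygamma(1, 1 + n)
        n = max(n, 0.0)
    return n

A, B = 0.5, -0.2                 # hypothetical Lagrange multipliers
for e in [0.0, 1.0, 2.0]:
    N = invert_digamma(A + B * e)
    assert abs(digamma(1 + N) - (A + B * e)) < 1e-10
```

Since Ψ(1 + n) is increasing and concave in n, Newton's method converges monotonically once it lands to the right of the root, which is why a crude starting guess suffices here.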
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems, either.
Analytical Energy Gradients for Excited-State Coupled-Cluster Methods
NASA Astrophysics Data System (ADS)
Wladyslawski, Mark; Nooijen, Marcel
The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. 
The Lagrangian multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: A Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit equations for the wavefunction amplitudes, the Lagrange multipliers, and the analytical gradient via the perturbation-independent generalized Hellmann-Feynman effective density matrix. This systematic automated derivation procedure is applied to obtain the detailed gradient equations for the excitation energy (EE-), double ionization potential (DIP-), and double electron affinity (DEA-) similarity transformed equation-of-motion coupled-cluster singles-and-doubles (STEOM-CCSD) methods. In addition, the derivatives of the closed-shell-reference excitation energy (EE-), ionization potential (IP-), and electron affinity (EA-) equation-of-motion coupled-cluster singles-and-doubles (EOM-CCSD) methods are derived. Furthermore, the perturbative EOM-PT and STEOM-PT gradients are obtained. The algebraic derivative expressions for these dozen methods are all derived here uniformly through the automated Lagrange multiplier process and are expressed compactly in a chain-rule/intermediate-density formulation, which facilitates a unified modular implementation of analytic energy gradients for CCSD/PT-based electronic methods. The working equations for these analytical gradients are presented in full detail, and their factorization and implementation into an efficient computer code are discussed.
NASA Technical Reports Server (NTRS)
Klein, L. R.
1974-01-01
The free vibrations of elastic structures of arbitrary complexity were analyzed in terms of their component modes. The method was based upon the use of the normal unconstrained modes of the components in a Rayleigh-Ritz analysis. The continuity conditions were enforced by means of Lagrange Multipliers. Examples of the structures considered are: (1) beams with nonuniform properties; (2) airplane structures with high or low aspect ratio lifting surface components; (3) the oblique wing airplane; and (4) plate structures. The method was also applied to the analysis of modal damping of linear elastic structures. Convergence of the method versus the number of modes per component and/or the number of components is discussed and compared to more conventional approaches, ad-hoc methods, and experimental results.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
MaxEnt alternatives to pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
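A stripped-down numerical version of the underlying machinery: for a single moment constraint, the Lagrange conditions give a density of exponential form, and the multiplier is pinned down by requiring the constraint to hold. The discrete support and target mean below are invented for illustration (the paper's actual constraints are on β1 and β2):

```python
import numpy as np

# Discrete MaxEnt sketch: maximize entropy on support {0,...,4} subject to
# a prescribed mean. Stationarity gives p_i ∝ exp(-lam * x_i); the
# multiplier lam is then found numerically, here by bisection.
x = np.arange(5.0)
target_mean = 1.0

def mean_of(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return p @ x

lo, hi = -10.0, 10.0          # mean_of is decreasing in lam on this bracket
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mean_of(mid) > target_mean:
        lo = mid
    else:
        hi = mid

lam = 0.5 * (lo + hi)
p = np.exp(-lam * x)
p /= p.sum()
entropy = -(p * np.log(p)).sum()
print(round(float(p @ x), 6))   # 1.0 — the moment constraint is met
```

With several constraints (as for prescribed β1 and β2) the same idea applies, but a system of multiplier equations must be solved simultaneously rather than a single bisection.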
A fictitious domain approach for the Stokes problem based on the extended finite element method
NASA Astrophysics Data System (ADS)
Court, Sébastien; Fournié, Michel; Lozinski, Alexei
2014-01-01
In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not match the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid the inf-sup condition between the spaces for the velocity and the Lagrange multiplier. Convergence analysis is given and several numerical tests are performed to illustrate the capabilities of the method.
Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A
2011-07-01
We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations by a woofer DM and small-amplitude higher-order aberrations by a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both a single DM controller and a successive dual DM controller. We evaluated performance using both wavefront error (RMS) and image quality metrics, including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of the human retina.
Mixed formulation for frictionless contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Kim, Kyun O.
1989-01-01
Simple mixed finite element models and a computational procedure are presented for the solution of frictionless contact problems. The analytical formulation is based on a form of Reissner's large rotation theory of the structure with the effects of transverse shear deformation included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the internal forces (stress resultants), the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The element characteristic arrays are obtained by using a modified form of the two-field Hellinger-Reissner mixed variational principle. The internal forces and the Lagrange multipliers are allowed to be discontinuous at interelement boundaries. The Newton-Raphson iterative scheme is used for the solution of the nonlinear algebraic equations and for the determination of the contact area and the contact pressures.
Unsteady combustion of solid propellants
NASA Astrophysics Data System (ADS)
Chung, T. J.; Kim, P. K.
The oscillatory motions of all field variables (pressure, temperature, velocity, density, and fuel fractions) in the flame zone of solid propellant rocket motors are calculated using the finite element method. The Arrhenius law with a single step forward chemical reaction is used. Effects of radiative heat transfer, impressed arbitrary acoustic wave incidence, and idealized mean flow velocities are also investigated. Boundary conditions are derived at the solid-gas interfaces and at the flame edges which are implemented via Lagrange multipliers. Perturbation expansions of all governing conservation equations up to and including the second order are carried out so that nonlinear oscillations may be accommodated. All excited frequencies are calculated by means of eigenvalue analyses, and the combustion response functions corresponding to these frequencies are determined. It is shown that the use of isoparametric finite elements, Gaussian quadrature integration, and the Lagrange multiplier boundary matrix scheme offers a convenient approach to two-dimensional calculations.
Zou, Weiyao; Burns, Stephen A.
2012-01-01
A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
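The damped least-squares step at the core of this controller can be sketched as a Tikhonov-regularized solve. The influence matrix and slope data below are synthetic, and reading "eigenvalue spectrum of the influence matrix" as the spectrum of GᵀG is our assumption; the actual W-T controller couples two such solves through the Lagrange multiplier:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 8))   # hypothetical influence matrix (slopes x actuators)
s = rng.standard_normal(20)        # synthetic wavefront-slope measurements

# Damping factor: median of the eigenvalue spectrum (assumed to mean eig(G^T G)).
evals = np.linalg.eigvalsh(G.T @ G)
damping = float(np.median(evals))

# Damped least-squares actuator command: a = (G^T G + damping*I)^{-1} G^T s
a = np.linalg.solve(G.T @ G + damping * np.eye(8), G.T @ s)
resid = float(np.linalg.norm(s - G @ a))
```

The damping trades fitting accuracy for smaller (safer) actuator strokes; with the median eigenvalue it suppresses the poorly conditioned modes while leaving well-sensed modes nearly unregularized.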
An étude on global vacuum energy sequester
D’Amico, Guido; Kaloper, Nemanja; Padilla, Antonio; ...
2017-09-18
Recently two of the authors proposed a mechanism of vacuum energy sequester as a means of protecting the observable cosmological constant from quantum radiative corrections. The original proposal was based on using global Lagrange multipliers, but later a local formulation was provided. Subsequently other interesting claims of a different non-local approach to the cosmological constant problem were made, based again on global Lagrange multipliers. We examine some of these proposals and find their mutual relationship. We explain that the proposals which do not treat the cosmological constant counterterm as a dynamical variable require fine tunings to have acceptable solutions. Furthermore, the counterterm often needs to be retuned at every order in the loop expansion to cancel the radiative corrections to the cosmological constant, just like in standard GR. These observations are an important reminder of just how the proposal of vacuum energy sequester avoids such problems.
Retrieving Storm Electric Fields from Aircraft Field Mill Data. Part 1; Theory
NASA Technical Reports Server (NTRS)
Koshak, W. J.
2006-01-01
It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers) and also helps improve absolute calibration. Additionally, this paper introduces an alternate way of performing the absolute calibration of an aircraft that has some benefits over conventional analyses. It is accomplished by using the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part I: Theory
NASA Technical Reports Server (NTRS)
Koshak, W. J.
2005-01-01
It is shown that the problem of retrieving storm electric fields from an aircraft instrumented with several electric field mill sensors can be expressed in terms of a standard Lagrange multiplier optimization problem. The method naturally removes aircraft charge from the retrieval process without having to use a high voltage stinger and linearly combined mill data values. It also allows a variety of user-supplied physical constraints (the so-called side constraints in the theory of Lagrange multipliers). Additionally, this paper introduces a novel way of performing the absolute calibration of an aircraft that has several benefits over conventional analyses. In the new approach, absolute calibration is completed by inspecting the time derivatives of mill and pitch data for a pitch down maneuver performed at high (greater than 1 km) altitude. In Part II of this study, the above methods are tested and then applied to complete a full calibration of a Citation aircraft.
A hybridized formulation for the weak Galerkin mixed finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM), which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. Finally, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
Generalized ensemble theory with non-extensive statistics
NASA Astrophysics Data System (ADS)
Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke
2017-12-01
The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalization term of Tsallis' q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.
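The q-deformed distributions mentioned above are built on the Tsallis q-exponential, e_q(x) = [1 + (1-q)x]^{1/(1-q)} (taken as 0 when the bracket is non-positive), which recovers the ordinary exponential as q → 1. A minimal limit check, with numbers chosen only for illustration:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# As q approaches 1, e_q(2) approaches e^2 ≈ 7.389.
print(round(q_exp(2.0, 1.0001), 2))   # 7.39
```

The q-deformed Bose-Einstein and Fermi-Dirac distributions in the abstract replace exp with e_q in the usual occupancy formulas; the precise form depends on the constraint scheme the paper adopts.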
Duality in non-linear programming
NASA Astrophysics Data System (ADS)
Jeyalakshmi, K.
2018-04-01
In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
Assessing marginal water values in multipurpose multireservoir systems via stochastic programming
NASA Astrophysics Data System (ADS)
Tilmant, A.; Pinte, D.; Goor, Q.
2008-12-01
The International Conference on Water and the Environment held in Dublin in 1992 emphasized the need to consider water as an economic good. Since water markets are usually absent or ineffective, the value of water cannot be directly derived from market activities but must rather be assessed through shadow prices. Economists have developed various valuation techniques to determine the economic value of water, especially to handle allocation issues involving environmental water uses. Most of the nonmarket valuation studies reported in the literature focus on long-run policy problems, such as permanent (re)allocations of water, and assume that the water availability is given. When dealing with short-run allocation problems, water managers are facing complex spatial and temporal trade-offs and must therefore be able to track site and time changes in water values across different hydrologic conditions, especially in arid and semiarid areas where the availability of water is a limiting and stochastic factor. This paper presents a stochastic programming approach for assessing the statistical distribution of marginal water values in multipurpose multireservoir systems where hydropower generation and irrigation crop production are the main economic activities depending on water. In the absence of a water market, the Lagrange multipliers correspond to shadow prices, and the marginal water values are the Lagrange multipliers associated with the mass balance equations of the reservoirs. The methodology is illustrated with a cascade of hydroelectric-irrigation reservoirs in the Euphrates river basin in Turkey and Syria.
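The identification of marginal water value with the multiplier on a mass balance constraint can be seen in a two-user toy allocation (all numbers invented; the tiny linear program is solved greedily, and the shadow price is exposed by perturbing the water availability):

```python
# Toy allocation: water W shared by hydropower (benefit 5 per unit, capacity 6)
# and irrigation (benefit 3 per unit, capacity 8). Greedy allocation to the
# higher-value user first solves this little LP exactly.
def benefit(W):
    h = min(6.0, W)          # hydropower takes water first
    i = min(8.0, W - h)      # irrigation gets the remainder
    return 5.0 * h + 3.0 * i

# Marginal (shadow) water value = Lagrange multiplier on the balance
# constraint, approximated by a finite difference on the optimal benefit.
shadow = (benefit(10.001) - benefit(10.0)) / 0.001
print(round(shadow, 3))   # 3.0 — extra water goes to the marginal user (irrigation)
```

In the stochastic programming setting of the paper, this same quantity is read directly from the multipliers of the reservoir mass balance equations, and it varies across sites, time steps, and hydrologic states.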
Automated Testability Decision Tool
1991-09-01
Vol. 16, 1968, pp. 538-558. Bertsekas, D. P., "Constrained Optimization and Lagrange Multiplier Methods," Academic Press, New York. McLeavey, D.W... McLeavey, J.A., "Parallel Optimization Methods in Standby Reliability," University of Connecticut, School of Business Administration, Bureau of Business
Modelling Truck Camper Production
ERIC Educational Resources Information Center
Kramlich, G. R., II; Kobylski, G.; Ahner, D.
2008-01-01
This note describes an interdisciplinary project designed to enhance students' knowledge of the basic techniques taught in a multivariable calculus course. The note discusses the four main requirements of the project and then the solutions for each requirement. Concepts covered include differentials, gradients, Lagrange multipliers, constrained…
Using Redundancy To Reduce Errors in Magnetometer Readings
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
NASA Astrophysics Data System (ADS)
Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca
2018-02-01
Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. the impulse-based substructuring) and in the frequency domain (i.e. the Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with consequent uncertainty propagation issues, associated either with experimental errors or with modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
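A simplified scalar version of this allocation problem shows the shape of the closed-form solution: if each intensity estimate has variance σ_i²/t_i and Σt_i = T is fixed, minimizing the total variance with a Lagrange multiplier gives t_i = T·σ_i/Σσ_j. The noise levels below are invented, and the paper's full model concerns the Stokes-vector estimator rather than independent scalars:

```python
import numpy as np

sigma = np.array([1.0, 2.0, 0.5, 1.5])   # hypothetical per-measurement noise levels
T = 4.0                                   # fixed total integration time

# Lagrange-multiplier solution: allocate time proportionally to sigma_i.
t_opt = T * sigma / sigma.sum()
var_opt = float((sigma**2 / t_opt).sum())      # equals (sum sigma)^2 / T
var_equal = float((sigma**2 / (T / 4)).sum())  # naive equal split
print(round(var_opt, 6), round(var_equal, 6))  # 6.25 7.5 — about 17% lower here
```

The improvement grows with the spread of the σ_i; for identical noise levels the optimal split degenerates to the equal split, consistent with intuition.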
Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie
2016-04-04
We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
An Example of Branching in a Variational Problem.
ERIC Educational Resources Information Center
Darbro, Wesley
1978-01-01
Investigates the shape a liquid takes, due to its surface tension while suspended upon a wire frame in zero-g, using Lagrange multipliers. Shows how the configuration of soap films so bounded are dependent upon the volume of liquid trapped in the films. (Author/GA)
The Phase Rule in a System Subject to a Pressure Gradient
NASA Astrophysics Data System (ADS)
Podladchikov, Yuri; Connolly, James; Powell, Roger; Aardvark, Alberto
2015-04-01
It can be shown by diligent application of Lagrange's method of undetermined multipliers that the phase rule in a system subject to a pressure gradient is: � + 赑 ≥ ρ. We explore the consequence of this important relationship for natural systems.
Portfolio Analysis for Vector Calculus
ERIC Educational Resources Information Center
Kaplan, Samuel R.
2015-01-01
Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…
Three dimensional elements with Lagrange multipliers for the modified couple stress theory
NASA Astrophysics Data System (ADS)
Kwon, Young-Rok; Lee, Byung-Chai
2018-07-01
Three dimensional mixed elements for the modified couple stress theory are proposed. The C1 continuity for the displacement field, which is required because of the curvature term in the variational form of the theory, is satisfied weakly by introducing a supplementary rotation as an independent variable and constraining the relation between the rotation and the displacement with a Lagrange multiplier vector. An additional constraint on the deviatoric curvature is also considered for three dimensional problems. Weak forms with one constraint and with two constraints are derived, and four elements satisfying the convergence criteria are developed by applying different approximations to each field of independent variables. The elements pass a patch test for three dimensional problems. Numerical examples show that the additional constraint could be considered essential for the three dimensional elements, and one of the elements is recommended for practical applications via a comparison of the performances of the elements. In addition, all the proposed elements can represent the size effect well.
Optimization of constrained density functional theory
NASA Astrophysics Data System (ADS)
O'Regan, David D.; Teobaldi, Gilberto
2016-07-01
Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.
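The paper's key structural result, that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers, can be made visible in a one-dimensional caricature (the quadratic "energy" and linear "density" below are invented solely to expose the stationary-point character, not a DFT model):

```python
import numpy as np

# Caricature of cDFT: E(x) = x^2, linear "density" n(x) = x constrained to N = 1.
# The constrained functional W(V) = min_x [E(x) + V*(n(x) - N)] is concave in
# the multiplier V, and the physical solution sits at its maximum.
V = np.linspace(-6.0, 2.0, 801)
x_min = -V / 2.0                     # argmin of x^2 + V*x
W = x_min**2 + V * (x_min - 1.0)     # = -V^2/4 - V, concave in V
V_star = float(V[np.argmax(W)])
print(round(V_star, 2), round(float(W.max()), 4))   # -2.0 1.0
```

The maximum value W = 1 equals the constrained minimum of x² at x = 1, and maximizing over V (rather than minimizing) is exactly why robust multiplier optimizers for cDFT ascend in the multiplier direction.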
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Weixiong; Wang, Yaqi; DeHart, Mark D.
2016-09-01
In this report, we present a new upwinding scheme for the multiscale capability in Rattlesnake, the MOOSE-based radiation transport application. Compared with the initial implementation of the multiscale capability, which utilized Lagrange multipliers to impose strong continuity of the angular flux on the interfaces between subdomains, this scheme does not require a particular domain partitioning. The upwinding scheme introduces discontinuity of the angular flux and resembles the classic upwinding technique developed for solving the first-order transport equation with the discontinuous finite element method (DFEM) on the subdomain interfaces. Because this scheme restores the causality of radiation streaming on the interfaces, significant accuracy improvement can be observed with a moderate increase of the degrees of freedom compared with the continuous method over the entire solution domain. Hybrid SN-PN is implemented and tested with this upwinding scheme. Numerical results show that the angular smoothing required by the Lagrange multiplier method is not necessary for the upwinding scheme.
Comment on "Troublesome aspects of the Renyi-MaxEnt treatment"
NASA Astrophysics Data System (ADS)
Oikonomou, Thomas; Bagci, G. Baris
2017-11-01
Plastino et al. [Plastino et al., Phys. Rev. E 94, 012145 (2016), 10.1103/PhysRevE.94.012145] recently stated that the Rényi entropy is not suitable for thermodynamics by using functional calculus, since it leads to anomalous results unlike the Tsallis entropy. We first show that the Tsallis entropy also leads to such anomalous behaviors if one adopts the same functional calculus approach. Second, we note that one of the Lagrange multipliers is set in an ad hoc manner in the functional calculus approach of Plastino et al. Finally, the explanation for these anomalous behaviors is provided by observing that the generalized distributions obtained by Plastino et al. do not yield the ordinary canonical partition function in the appropriate limit and therefore cannot be considered as genuine generalized distributions.
Symplectic Quantization of a Reducible Theory
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. This is a reducible theory in the sense that not all of its constraints are independent. A procedure like the ghost-of-ghost of the BFV method has to be used, but in terms of Lagrange multipliers.
Spatial Dynamics and Determinants of County-Level Education Expenditure in China
ERIC Educational Resources Information Center
Gu, Jiafeng
2012-01-01
In this paper, a multivariate spatial autoregressive model of local public education expenditure determination with autoregressive disturbance is developed and estimated. The existence of spatial interdependence is tested using Moran's I statistic and Lagrange multiplier test statistics for both the spatial error and spatial lag models. The full…
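Moran's I, the spatial-autocorrelation statistic tested in the abstract above, is straightforward to compute. A minimal sketch on hypothetical data (a chain of four regions; weights and values invented for illustration):

```python
# Minimal Moran's I for spatial autocorrelation on toy data.
# I = (n / S0) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2
def morans_i(x, W):
    n = len(x)
    m = sum(x) / n
    s0 = sum(sum(row) for row in W)
    num = sum(W[i][j] * (x[i] - m) * (x[j] - m)
              for i in range(n) for j in range(n))
    den = sum((xi - m) ** 2 for xi in x)
    return (n / s0) * num / den

# Four regions in a chain, symmetric binary contiguity weights
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
x = [10.0, 9.0, 2.0, 1.0]   # hypothetical expenditures
print(morans_i(x, W))        # positive: similar values cluster
```

A value well above the expectation −1/(n−1) under the null suggests positive spatial dependence, which is what the paper's Lagrange multiplier tests then attribute to a spatial-error or spatial-lag process.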
NASA Astrophysics Data System (ADS)
Martínez, Sonia; Cortés, Jorge; de León, Manuel
2000-04-01
A vakonomic mechanical system can be alternatively described by an extended Lagrangian using the Lagrange multipliers as new variables. Since this extended Lagrangian is singular, the constraint algorithm can be applied and a Dirac bracket giving the evolution of the observables can be constructed.
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
Lagrange Multipliers, Adjoint Equations, the Pontryagin Maximum Principle and Heuristic Proofs
ERIC Educational Resources Information Center
Ollerton, Richard L.
2013-01-01
Deeper understanding of important mathematical concepts by students may be promoted through the (initial) use of heuristic proofs, especially when the concepts are also related back to previously encountered mathematical ideas or tools. The approach is illustrated by use of the Pontryagin maximum principle which is then illuminated by reference to…
Three-Dimensional Profiles Using a Spherical Cutting Bit: Problem Solving in Practice
ERIC Educational Resources Information Center
Ollerton, Richard L.; Iskov, Grant H.; Shannon, Anthony G.
2002-01-01
An engineering problem concerned with relating the coordinates of the centre of a spherical cutting tool to the actual cutting surface leads to a potentially rich example of problem-solving techniques. Basic calculus, Lagrange multipliers and vector calculus techniques are employed to produce solutions that may be compared to better understand…
Volatility in GARCH Models of Business Tendency Index
NASA Astrophysics Data System (ADS)
Wahyuni, Dwi A. S.; Wage, Sutarman; Hartono, Ateng
2018-01-01
This paper aims to obtain a model of the business tendency index that accounts for volatility. Volatility is detected through ARCH (Autoregressive Conditional Heteroscedasticity) effects, checked using the Lagrange multiplier test. The model used is Generalized Autoregressive Conditional Heteroscedasticity (GARCH), which is able to overcome volatility problems by incorporating past residuals and residual variances.
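The ARCH check mentioned above is Engle's Lagrange multiplier test: regress the squared residuals on their own lags and use T·R² as the statistic, approximately χ² with one degree of freedom per lag under the null of no ARCH effects. A one-lag sketch with stdlib OLS (toy data, not the paper's index series):

```python
import random

# One-lag ARCH Lagrange multiplier test (sketch):
# regress e_t^2 on [1, e_{t-1}^2]; LM = T * R^2 is approximately
# chi2(1) under the null of no ARCH effects.
def arch_lm_1(e):
    y = [v * v for v in e[1:]]          # e_t^2
    x = [v * v for v in e[:-1]]         # e_{t-1}^2
    T = len(y)
    mx, my = sum(x) / T, sum(y) / T
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                       # OLS slope
    a = my - b * mx                     # OLS intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return T * r2

random.seed(0)
iid = [random.gauss(0, 1) for _ in range(500)]
print(arch_lm_1(iid))   # compare against the chi2(1) 5% cutoff, ~3.84
```

If the statistic exceeds the χ²(1) cutoff, squared residuals are predictable from their past, which motivates fitting a GARCH variance equation.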
NASA Astrophysics Data System (ADS)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
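The dual-assembly structure described above, where Lagrange multipliers nullify the gap between domains, leads to a saddle-point system. A minimal sketch (two one-DOF springs tied together, not the paper's non-conforming interface discretization):

```python
# Dual assembly sketch: two springs (k1, k2) loaded by f1, f2,
# tied by the constraint u1 - u2 = 0 via a Lagrange multiplier lam.
# Saddle-point system:  [k1  0   1] [u1 ]   [f1]
#                       [0   k2 -1] [u2 ] = [f2]
#                       [1  -1   0] [lam]   [0 ]
def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting (3x3, stdlib only)
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

k1, k2, f1, f2 = 2.0, 3.0, 1.0, 4.0
A = [[k1, 0.0, 1.0],
     [0.0, k2, -1.0],
     [1.0, -1.0, 0.0]]
u1, u2, lam = solve3(A, [f1, f2, 0.0])
print(u1, u2, lam)   # u1 == u2; lam is the interface (tying) force
```

The multiplier block contributes the zero diagonal that makes such systems indefinite, which is why specialized solvers (e.g., FETI-type methods) are used at scale.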
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Generalized Gibbs distribution and energy localization in the semiclassical FPU problem
NASA Astrophysics Data System (ADS)
Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli
2011-03-01
We investigate the dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.
Coupled structural, thermal, phase-change and electromagnetic analysis for superconductors, volume 2
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Farhat, Charbel; Park, K. C.; Militello, Carmelo; Schuler, James J.
1993-01-01
Two families of parametrized mixed variational principles for linear electromagnetodynamics are constructed. The first family is applicable when the current density distribution is known a priori. Its six independent fields are magnetic intensity and flux density, magnetic potential, electric intensity and flux density and electric potential. Through appropriate specialization of parameters the first principle reduces to more conventional principles proposed in the literature. The second family is appropriate when the current density distribution and a conjugate Lagrange multiplier field are adjoined, giving a total of eight independently varied fields. In this case it is shown that a conventional variational principle exists only in the time-independent (static) case. Several static functionals with reduced number of varied fields are presented. The application of one of these principles to construct finite elements with current prediction capabilities is illustrated with a numerical example.
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer
1999-01-01
The use of Lagrange multiplier (LM) tests in specification searches and the efforts that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters to LM tests to maximize fit and then deleting parameters not needed…
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single-constraint case is treated first, followed by the multiple-constraint case, which requires a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
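For the separable single-constraint case, the optimality criteria solution is available in closed form. A hedged sketch (generic weight-vs-displacement model with invented coefficients, not Berke's formulation verbatim): minimizing Σ wᵢaᵢ subject to Σ cᵢ/aᵢ = d, stationarity of the Lagrangian gives aᵢ = √(λcᵢ/wᵢ), and the constraint fixes λ.

```python
import math

# Single-constraint optimality criteria sketch: minimize member weight
# sum(w_i * a_i) subject to a displacement constraint sum(c_i / a_i) = d.
# Lagrangian stationarity: w_i - lam * c_i / a_i^2 = 0
#   =>  a_i = sqrt(lam * c_i / w_i)
# Substituting into the constraint gives sqrt(lam) in closed form.
def oc_sizes(w, c, d):
    sqrt_lam = sum(math.sqrt(wi * ci) for wi, ci in zip(w, c)) / d
    lam = sqrt_lam ** 2
    a = [math.sqrt(lam * ci / wi) for wi, ci in zip(w, c)]
    return a, lam

w = [1.0, 2.0, 4.0]   # weight coefficients (hypothetical)
c = [8.0, 2.0, 1.0]   # member flexibility coefficients (hypothetical)
d = 2.0               # displacement limit
a, lam = oc_sizes(w, c, d)
print(a, lam)         # sizes satisfy the displacement constraint exactly
```

With multiple active constraints no such closed form exists, and the multipliers are found iteratively, which is the harder case the abstract refers to.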
Higher order sensitivity of solutions to convex programming problems without strict complementarity
NASA Technical Reports Server (NTRS)
Malanowski, Kazimierz
1988-01-01
Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
Analysis and computation of a least-squares method for consistent mesh tying
Day, David; Bochev, Pavel
2007-07-10
In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods then lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197–1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342–352], applied to the partial differential equation -∇²φ+αφ=f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Finally, theoretical error estimates are illustrated by numerical experiments.
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols; the codes are derived from the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
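The prediction-plus-Huffman pipeline can be sketched compactly. This toy uses a previous-sample predictor on invented elevations (not the paper's 8-point Lagrange-optimized predictor) and derives Huffman code lengths for the corrections:

```python
import heapq
from collections import Counter

# Sketch: Huffman-code the residuals of a simple previous-sample
# predictor, i.e. statistically encode corrections rather than raw
# elevations (toy data; not the paper's 8-point predictor).
def huffman_lengths(freqs):
    # Returns {symbol: code length in bits}; the unique index in each
    # heap tuple breaks frequency ties without comparing the dicts.
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

elev = [100, 101, 103, 103, 104, 106, 106, 107, 107, 108]
resid = [b - a for a, b in zip(elev, elev[1:])]   # first elevation sent raw
freqs = Counter(resid)
lens = huffman_lengths(freqs)
bits = sum(lens[s] for s in resid)
print(freqs, lens, bits)   # skewed residuals -> short average codes
```

Because the corrections are concentrated on a few small values, their Huffman code is much shorter on average than a fixed-width code for the raw elevations, which is exactly the gain the abstract describes.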
Troublesome aspects of the Renyi-MaxEnt treatment.
Plastino, A; Rocca, M C; Pennini, F
2016-07-01
We study in great detail the possible existence of a Renyi-associated thermodynamics, with negative results. In particular, we uncover a hidden relation in Renyi's variational problem (MaxEnt). This relation connects the two associated Lagrange multipliers (canonical ensemble) with the mean energy 〈U〉 and the Renyi parameter α. As a consequence of such relation, we obtain anomalous Renyi-MaxEnt thermodynamic results.
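For contrast with the Rényi case studied above, the ordinary (Shannon) MaxEnt problem fixes its Lagrange multiplier unambiguously: the canonical β is whatever value reproduces the prescribed mean energy, and ⟨E⟩(β) is monotone, so bisection suffices. A minimal sketch on a toy three-level spectrum (illustrative numbers only):

```python
import math

# Ordinary MaxEnt sketch: find the Lagrange multiplier beta so that the
# canonical distribution p_i = exp(-beta*E_i)/Z reproduces a target
# mean energy U. <E>(beta) decreases monotonically, so bisect.
def mean_energy(beta, E):
    ws = [math.exp(-beta * e) for e in E]
    Z = sum(ws)
    return sum(e * w for e, w in zip(E, ws)) / Z

def solve_beta(E, U, lo=-50.0, hi=50.0, iters=200):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid, E) > U:   # energy too high -> raise beta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E = [0.0, 1.0, 2.0]        # toy spectrum
beta = solve_beta(E, U=0.5)
print(beta, mean_energy(beta, E))
```

The anomalies debated in these comments arise precisely when a generalized entropy breaks this clean one-multiplier-per-constraint picture.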
NASA Astrophysics Data System (ADS)
Hu, Mengsu; Wang, Yuan; Rutqvist, Jonny
2015-06-01
One major challenge in modeling groundwater flow within heterogeneous geological media is that of modeling arbitrarily oriented or intersected boundaries and inner material interfaces. The Numerical Manifold Method (NMM) has recently emerged as a promising method for such modeling, in its ability to handle boundaries, its flexibility in constructing physical cover functions (continuous or with gradient jump), its meshing efficiency with a fixed mathematical mesh (covers), its convenience for enhancing approximation precision, and its integration precision, achieved by simplex integration. In this paper, we report on developing and comparing two new approaches for boundary constraints using the NMM, namely a continuous approach with jump functions and a discontinuous approach with Lagrange multipliers. In the discontinuous Lagrange multiplier method (LMM), the material interfaces are regarded as discontinuities which divide mathematical covers into different physical covers. We define and derive stringent forms of Lagrange multipliers to link the divided physical covers, thus satisfying the continuity requirement of the refraction law. In the continuous Jump Function Method (JFM), the material interfaces are regarded as inner interfaces contained within physical covers. We briefly define jump terms to represent the discontinuity of the head gradient across an interface to satisfy the refraction law. We then make a theoretical comparison between the two approaches in terms of global degrees of freedom, treatment of multiple material interfaces, treatment of small area, treatment of moving interfaces, the feasibility of coupling with mechanical analysis and applicability to other numerical methods. The newly derived boundary-constraint approaches are coded into a NMM model for groundwater flow analysis, and tested for precision and efficiency on different simulation examples. 
We first test the LMM for a Dirichlet boundary and then test both LMM and JFM for an idealized heterogeneous model, comparing the numerical results with analytical solutions. Then we test both approaches for a heterogeneous model and compare the results of hydraulic head and specific discharge. We show that both approaches are suitable for modeling material boundaries, considering high accuracy for the boundary constraints, the capability to deal with arbitrarily oriented or complexly intersected boundaries, and their efficiency using a fixed mathematical mesh.
Diffuse interface models of locally inextensible vesicles in a viscous fluid
Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel
2014-01-01
We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712
Preconditioned alternating direction method of multipliers for inverse problems with constraints
NASA Astrophysics Data System (ADS)
Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie
2017-02-01
We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
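The ADMM iteration pattern referred to above can be shown on the smallest possible instance (a scalar split problem with an ℓ¹ term, not the authors' Hilbert-space setting): the x-update solves a quadratic, the z-update is a proximal (soft-thresholding) step, and u accumulates the scaled multiplier.

```python
# ADMM sketch for the toy split problem
#   min_x 0.5*(x - b)^2 + mu*|z|   subject to  x = z,
# whose exact solution is soft-thresholding of b.
def soft(v, t):
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def admm(b, mu, rho=1.0, iters=200):
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # x-update (quadratic)
        z = soft(x + u, mu / rho)               # z-update (prox of |.|)
        u = u + x - z                           # scaled dual multiplier
    return x, z, u

b, mu = 3.0, 1.0
x, z, u = admm(b, mu)
print(x, z)   # both approach soft(b, mu) = 2.0
```

In the paper's setting the x-update is the expensive step, which is where the preconditioning operator is introduced to avoid solving large linear systems.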
Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Zhang, Yingchen
2017-05-17
Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
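The price-as-multiplier idea behind DLMP has a simple lossless limit: in a single-bus merit-order dispatch, the Lagrange multiplier on the power-balance constraint equals the marginal cost of the most expensive running unit. A toy sketch with hypothetical generators (the paper's three-phase AC OPF, with losses and phase imbalance, is far richer):

```python
# Toy merit-order dispatch: in a lossless single-bus economic dispatch,
# the multiplier on the power-balance constraint (the clearing "price")
# equals the marginal cost of the most expensive dispatched unit.
def dispatch(gens, demand):
    # gens: list of (marginal_cost $/MWh, capacity MW) - hypothetical
    out, price = [], 0.0
    remaining = demand
    for cost, cap in sorted(gens):
        p = min(cap, remaining)
        out.append((cost, p))
        if p > 0:
            price = cost
        remaining -= p
    assert remaining <= 1e-9, "demand exceeds capacity"
    return out, price

gens = [(40.0, 50.0), (20.0, 100.0), (35.0, 80.0)]
sched, price = dispatch(gens, demand=150.0)
print(sched, price)   # the marginal (last dispatched) unit sets the price
```

Losses, voltage limits, and per-phase coupling make the real DLMP differ by bus and phase, which is exactly what the three-phase AC OPF multipliers capture.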
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure like the ghost-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with the ones found in the literature using the Dirac method.
New Forms of BRST Symmetry in Rigid Rotor
NASA Astrophysics Data System (ADS)
Rai, Sumit Kumar; Mandal, Bhabani Prasad
We derive the different forms of BRST symmetry by using the Batalin-Fradkin-Vilkovisky formalism in a rigid rotor. The so-called "dual-BRST" symmetry is obtained from the usual BRST symmetry by making a canonical transformation in the ghost sector. On the other hand, a canonical transformation in the sector involving the Lagrange multiplier and its corresponding momentum leads to a new form of the BRST as well as the dual-BRST symmetry.
Towards Long-Time Simulation of Soft Tissue Simulant Penetration
2008-12-01
materials involved in testing. Experiments, for instance firing high speed bullets at steel plates of different thicknesses (see [2]), reveal large... 'L' shaped beam against a rigid wall using AVI and the almost exact energy conservation of the system. With traditional time integrators, the time... and avoiding ill-conditioning issues is often non-trivial. Likewise, Lagrange multipliers have also been used to impose the contact constraint at
Antunes, J; Debut, V
2017-02-01
Most musical instruments consist of dynamical subsystems connected at a number of constraining points through which energy flows. For physical sound synthesis, one important difficulty is enforcing these coupling constraints. While standard techniques include the use of Lagrange multipliers or penalty methods, in this paper a different approach is explored, the Udwadia-Kalaba (U-K) formulation, which is rooted in analytical dynamics but avoids the use of Lagrange multipliers. This general and elegant formulation has been nearly exclusively used for conceptual systems of discrete masses or articulated rigid bodies, namely in robotics. However, its natural extension to continuous flexible systems is surprisingly absent from the literature. Here, such a modeling strategy is developed, and the potential of combining the U-K equation for constrained systems with a modal description is shown, in particular for simulating musical instruments. The objectives are twofold: (1) develop the U-K equation for constrained flexible systems with subsystems modelled through unconstrained modes; and (2) apply this framework to compute string/body coupled dynamics. This example complements previous work [Debut, Antunes, Marques, and Carvalho, Appl. Acoust. 108, 3-18 (2016)] on guitar modeling using penalty methods. Simulations show that the proposed technique provides similar results with a significant improvement in computational efficiency.
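The U-K formulation enforces constraints at the acceleration level without introducing explicit multipliers. A minimal sketch for a lumped (not flexible) system, the unit-mass pendulum with x² + y² = L² (an illustration of the idea, not the paper's modal string/body model):

```python
import math

# Udwadia-Kalaba sketch for a unit-mass particle pendulum with the
# constraint x^2 + y^2 = L^2. Differentiating the constraint twice
# gives A(q) qdd = b(q, qd) with A = [2x, 2y], b = -2*(vx^2 + vy^2).
# For m = 1 and one constraint the U-K corrected acceleration is
#   qdd = a + A^T * (b - A.a) / (A.A^T)
g, L = 9.81, 1.0

def accel(x, y, vx, vy):
    ax, ay = 0.0, -g                      # unconstrained acceleration
    A = (2.0 * x, 2.0 * y)
    b = -2.0 * (vx * vx + vy * vy)
    s = (b - (A[0] * ax + A[1] * ay)) / (A[0] ** 2 + A[1] ** 2)
    return ax + A[0] * s, ay + A[1] * s   # constraint force folded in

def rk4_step(state, dt):
    def f(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y, vx, vy)
        return (vx, vy, ax, ay)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

state = (L, 0.0, 0.0, 0.0)                # released horizontally
for _ in range(1000):
    state = rk4_step(state, 1e-3)
r = math.hypot(state[0], state[1])
print(r)   # stays close to L without any explicit multiplier variable
```

For multiple constraints and general mass matrices the scalar division becomes the pseudoinverse M^{1/2}(A M^{-1/2})⁺ of the full U-K equation; the paper's contribution is extending this to flexible subsystems described by unconstrained modes.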
Predicting the safe load on backpacker's arm using Lagrange multipliers method
NASA Astrophysics Data System (ADS)
Abdalla, Faisal Saleh; Rambely, Azmin Sham
2014-09-01
In this study, a technique is suggested to reduce backpack load by transferring a determined portion of the load to the child's arm. The purpose of this paper is to estimate school children's arm muscle forces during load carriage, and to determine the load that can safely be carried at the wrist while walking with a backpack. A mathematical model with three degrees of freedom was investigated in the sagittal plane, and the Lagrange multipliers method (LMM) was utilized to minimize a quadratic objective function of muscle forces. The muscle forces were minimized under three load conditions, termed 0-L = 0 N, 1-L = 21.95 N, and 2-L = 43.9 N. The estimated muscle forces were compared to their maximum forces across the load conditions. Both flexor and extensor muscles were considered; the results showed that the flexor muscles were active while the extensor muscles were inactive. The estimated muscle forces did not exceed their maximum values under the 0-L and 1-L conditions, whereas the biceps and FCR muscles exceeded their maximum forces under the 2-L condition. Consequently, the 1-L condition is quite safe to carry by hand whereas the 2-L condition is not. Thus, to reduce the load in the backpack, the transferred load should not exceed the 1-L condition.
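The core minimization step in such muscle-force models has a closed form for a single equality constraint. A hedged sketch (generic moment-balance model with hypothetical moment arms and maximum forces, not the authors' three-DOF formulation): minimize Σ(fᵢ/Fmaxᵢ)² subject to Σrᵢfᵢ = M, where the Lagrange condition 2fᵢ/Fmaxᵢ² = λrᵢ gives each force directly.

```python
# Sketch of the minimization step: distribute a required joint moment M
# among muscles by minimizing sum((f_i / Fmax_i)^2) subject to the
# moment-balance constraint sum(r_i * f_i) = M.
# Lagrange condition 2*f_i/Fmax_i^2 = lam*r_i  =>  f_i = lam*r_i*Fmax_i^2/2,
# and the constraint fixes lam = 2*M / sum(r_i^2 * Fmax_i^2).
def muscle_forces(r, fmax, M):
    lam = 2.0 * M / sum(ri ** 2 * fi ** 2 for ri, fi in zip(r, fmax))
    return [0.5 * lam * ri * fi ** 2 for ri, fi in zip(r, fmax)]

r = [0.045, 0.025, 0.015]      # moment arms (m), hypothetical
fmax = [700.0, 300.0, 350.0]   # maximum muscle forces (N), hypothetical
M = 12.0                       # required moment (N*m)
f = muscle_forces(r, fmax, M)
print(f, sum(ri * fi for ri, fi in zip(r, f)))   # balance equals M
```

Comparing each fᵢ against Fmaxᵢ is then the safety check the abstract applies across its 0-L, 1-L, and 2-L load conditions.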
A general-purpose approach to computer-aided dynamic analysis of a flexible helicopter
NASA Technical Reports Server (NTRS)
Agrawal, Om P.
1988-01-01
A general purpose mathematical formulation is described for dynamic analysis of a helicopter consisting of flexible and/or rigid bodies that undergo large translations and rotations. Rigid body and elastic sets of generalized coordinates are used. The rigid body coordinates define the location and the orientation of a body coordinate frame (global frame) with respect to an inertial frame. The elastic coordinates are introduced using a finite element approach in order to model flexible components. The compatibility conditions between two adjacent elements in a flexible body are imposed using a Boolean matrix, whereas the compatibility conditions between two adjacent bodies are imposed using the Lagrange multiplier approach. Since the form of the constraint equations depends upon the type of kinematic joint and involves only the generalized coordinates of the two participating elements, then a library of constraint elements can be developed to impose the kinematic constraint in an automated fashion. For the body constraints, the Lagrange multipliers yield the reaction forces and torques of the bodies at the joints. The virtual work approach is used to derive the equations of motion, which are a system of differential and algebraic equations that are highly nonlinear. The formulation presented is general and is compared with hard-wired formulations commonly used in helicopter analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Dipayan, E-mail: datta@uni-mainz.de; Gauss, Jürgen, E-mail: gauss@uni-mainz.de
2014-09-14
An analytic scheme is presented for the evaluation of first derivatives of the energy for a unitary group based spin-adapted coupled cluster (CC) theory, namely, the combinatoric open-shell CC (COSCC) approach within the singles and doubles approximation. The widely used Lagrange multiplier approach is employed for the derivation of an analytical expression for the first derivative of the energy, which, in combination with the well-established density-matrix formulation, is used for the computation of first-order electrical properties. Derivations of the spin-adapted lambda equations for determining the Lagrange multipliers and the expressions for the spin-free effective density matrices for the COSCC approach are presented. Orbital-relaxation effects due to the electric-field perturbation are treated via the Z-vector technique. We present calculations of the dipole moments for a number of doublet radicals in their ground states using restricted open-shell Hartree-Fock (ROHF) and quasi-restricted HF (QRHF) orbitals in order to demonstrate the applicability of our analytic scheme for computing energy derivatives. We also report calculations of the chlorine electric-field gradients and nuclear quadrupole-coupling constants for the CCl, CH₂Cl, ClO₂, and SiCl radicals.
Unveiling the proton spin decomposition at a future electron-ion collider
Aschenauer, Elke C.; Sassot, Rodolfo; Stratmann, Marco
2015-11-24
We present a detailed assessment of how well a future electron-ion collider could constrain helicity parton distributions in the nucleon and, therefore, unveil the role of the intrinsic spin of quarks and gluons in the proton’s spin budget. Any remaining deficit in this decomposition will provide the best indirect constraint on the contribution due to the total orbital angular momenta of quarks and gluons. Specifically, all our studies are performed in the context of global QCD analyses based on realistic pseudodata and in the light of the most recent data obtained from polarized proton-proton collisions at BNL-RHIC, which have provided evidence for a significant gluon polarization in the accessible, albeit limited, range of momentum fractions. We also present projections on what can be achieved on the gluon’s helicity distribution by the end of BNL-RHIC operations. All estimates of current and projected uncertainties are performed with the robust Lagrange multiplier technique.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2018-01-01
This study applies the theoretical findings on the circularly-symmetric complex normal ratio distribution of Yan and Ren (2016) [1,2] to transmissibility-based modal analysis from a statistical viewpoint. A probabilistic model of the transmissibility function in the vicinity of the resonant frequency is formulated in the modal domain, and some insightful comments are offered. It is theoretically revealed that the statistics of the transmissibility function around the resonant frequency depend solely on the 'noise-to-signal' ratio and the mode shapes. As a sequel to the development of the probabilistic model of the transmissibility function in the modal domain, this study poses the process of modal identification in the context of a Bayesian framework by borrowing a novel paradigm. Implementation issues unique to the proposed approach are resolved by the Lagrange multiplier approach. This study also explores the possibility of applying Bayesian analysis to distinguish harmonic components from structural ones. The approaches are verified using both simulated data and experimental test data. The uncertainty behavior due to the variation of different factors is also discussed in detail.
NASA Technical Reports Server (NTRS)
Broucke, R.; Lass, H.
1975-01-01
It is shown that it is possible to make a change of variables in a Lagrangian in such a way that the number of variables is increased. The Euler-Lagrange equations in the redundant variables are obtained in the standard way (without the use of Lagrange multipliers). These equations are not independent but they are all valid and consistent. In some cases they are simpler than if the minimum number of variables are used. The redundant variables are supposed to be related to each other by several constraints (not necessarily holonomic), but these constraints are not used in the derivation of the equations of motion. The method is illustrated with the well known Kustaanheimo-Stiefel regularization. Some interesting applications to perturbation theory are also described.
NASA Astrophysics Data System (ADS)
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems with market mechanisms, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, few textbooks in science and technology explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationarity conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them in designing the mechanisms of more complicated markets.
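The tutorial's central interpretation, that the Lagrange multiplier of a supply-demand constraint acts as a market-clearing price found by steepest ascent on the dual, can be illustrated with a toy market. The cost coefficients and demand below are made up for the sketch.

```python
import numpy as np

# Toy illustration (not from the paper): producers i minimize cost
# a_i * x_i^2 while total output must meet demand d. The Lagrangian is
# sum_i a_i x_i^2 + lam * (d - sum_i x_i); the multiplier lam is the price.
# Dual (steepest-ascent) iteration: each producer best-responds to the
# current price, then the price rises when supply falls short of demand.
a = np.array([1.0, 2.0, 4.0])   # hypothetical cost coefficients
d = 7.0                         # demand
lam, step = 0.0, 0.5            # price (multiplier) and ascent step size

for _ in range(200):
    x = lam / (2.0 * a)                 # decentralized best responses
    lam += step * (d - x.sum())         # price update = dual gradient ascent

lam_exact = d / np.sum(1.0 / (2.0 * a))  # analytic market-clearing price
print(lam, lam_exact)                    # both approach 8.0 here
```

The price update is exactly a steepest-ascent step on the (concave) dual function, so it converges whenever the step size is small enough relative to the curvature.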
Maggi's equations of motion and the determination of constraint reactions
NASA Astrophysics Data System (ADS)
Papastavridis, John G.
1990-04-01
This paper presents a geometrical derivation of the constraint reaction-free equations of Maggi for mechanical systems subject to linear (first-order) nonholonomic and/or holonomic constraints. These results follow directly from the proper application of the concepts of virtual displacement and quasi-coordinates to the variational equation of motion, i.e., Lagrange's principle. The method also makes clear how to compute the constraint reactions (kinetostatics) without introducing Lagrangian multipliers.
Effects of DoD Engagements in Collaborative Humanitarian Assistance
2013-09-01
Breusch-Pagan (BP) test, which tests for heteroscedasticity in panel data using Lagrange multipliers. The null hypothesis for the BP test is that ... homoscedasticity is present (Breusch & Pagan, 1979, p. 1288). Each fixed effect, “CountryName,” “FiscalYear,” and the combined effect of both variables, was
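The Breusch-Pagan Lagrange multiplier statistic mentioned in this excerpt can be computed from first principles. Below is a minimal sketch of the studentized (Koenker) variant, LM = n·R² of an auxiliary regression of squared residuals on the regressors, applied to synthetic data rather than the report's panel data.

```python
import numpy as np
from scipy import stats

# Sketch of the studentized Breusch-Pagan LM test on synthetic data:
# 1) fit y on X by OLS, 2) regress squared residuals on X, 3) the
# statistic LM = n * R^2 of that auxiliary regression is asymptotically
# chi-squared with (k - 1) degrees of freedom (k regressors incl.
# intercept) under the null of homoscedasticity.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)          # homoscedastic errors

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ beta) ** 2                        # squared OLS residuals

g, *_ = np.linalg.lstsq(X, u2, rcond=None)      # auxiliary regression
fitted = X @ g
r2 = 1.0 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
lm = n * r2
p_value = stats.chi2.sf(lm, df=X.shape[1] - 1)  # chi-squared tail p-value
print(f"LM = {lm:.3f}, p = {p_value:.3f}")
```

A small p-value rejects homoscedasticity; on homoscedastic data like this, the p-value is uniformly distributed under the null.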
Quantum canonical ensemble: A projection operator approach
NASA Astrophysics Data System (ADS)
Magnus, Wim; Lemmens, Lucien; Brosens, Fons
2017-09-01
Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function Z_N and the Helmholtz free energy F_N as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from F_(N+1) - F_N, as illustrated for a two-dimensional fermion gas.
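The projection idea, an angular integral that picks the fixed-N sector out of a factorized Fock-space product, can be checked numerically for a small fermionic system. The levels, temperature, and notation below are my own illustration, not the paper's formulation.

```python
import numpy as np
from itertools import combinations

# Sketch of particle-number projection (notation mine): the canonical
# partition function for N fermions in levels eps_k follows from the
# grand-canonical-like product via an angular projection integral,
#   Z_N = (1/2*pi) * Int_0^{2*pi} dphi e^{-i N phi} Prod_k (1 + e^{i phi} e^{-beta eps_k}).
# Since the product is a trigonometric polynomial of degree len(eps),
# a uniform grid with M > len(eps) points evaluates the integral exactly.
beta = 1.3
eps = np.array([0.0, 0.5, 1.1, 2.0, 3.2])   # hypothetical levels
N = 2

M = len(eps) + 1
phi = 2.0 * np.pi * np.arange(M) / M
prod = np.prod(1.0 + np.exp(1j * phi[:, None] - beta * eps[None, :]), axis=1)
Z_N = np.mean(np.exp(-1j * N * phi) * prod).real

# Brute-force check: sum e^{-beta E} over all N-particle occupations.
Z_brute = sum(np.exp(-beta * sum(c)) for c in combinations(eps, N))
print(Z_N, Z_brute)
```

The projection result agrees with direct enumeration of the N-particle states, with no chemical-potential multiplier ever introduced.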
NASA Technical Reports Server (NTRS)
Doyle, G. R., Jr.; Burbick, J. W.
1974-01-01
The equations of motion and a computer program for the dynamics of a six degree of freedom body joined to a five degree of freedom body by a quasilinear elastic tether are presented. The forebody is assumed to be a completely general rigid body with six degrees of freedom; the decelerator is also assumed to be rigid, but with only five degrees of freedom (symmetric about its longitudinal axis). The tether is represented by a spring and dashpot in parallel, where the spring constant is a function of tether elongation. Lagrange's equation is used to derive the equations of motion with the Lagrange multiplier technique used to express the constraint provided by the tether. A computer program is included which provides a time history of the dynamics of both bodies and the tension in the tether.
The curious case of large-N expansions on a (pseudo)sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polyakov, Alexander M.; Saleem, Zain H.; Stokes, James
We elucidate the large-N dynamics of one-dimensional sigma models with spherical and hyperbolic target spaces and find a duality between the Lagrange multiplier and the angular momentum. In the hyperbolic model we propose a new class of operators based on the irreducible representations of hyperbolic space. We also uncover unexpected zero modes which lead to the double scaling of the 1/N expansion and explore these modes using Gelfand-Dikiy equations.
Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications
NASA Technical Reports Server (NTRS)
Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2006-01-01
The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m⁻¹ and a 5 V m⁻¹ error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m⁻¹, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m⁻¹ of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.
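A side constraint of the kind described here can be imposed on a least-squares retrieval via Lagrange multipliers. The following is a generic sketch on random synthetic data, not the Part I theory or the Citation mill geometry.

```python
import numpy as np

# Generic sketch: fit x to minimize ||A x - b||^2 subject to a linear
# side constraint C x = d, via the Lagrange multiplier (KKT) system
#   [2 A^T A  C^T] [x]   [2 A^T b]
#   [   C      0 ] [l] = [   d   ]
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 3))      # synthetic design matrix
b = rng.normal(size=20)           # synthetic observations
C = np.array([[1.0, 1.0, 1.0]])   # hypothetical side constraint: sum to 1
d = np.array([1.0])

n, m = A.shape[1], C.shape[0]
K = np.block([[2.0 * A.T @ A, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([2.0 * A.T @ b, d])
x, lam = np.split(np.linalg.solve(K, rhs), [n])
print("x =", x, "sum =", x.sum())
```

The multiplier measures how strongly the side constraint fights the unconstrained least-squares fit; a large |lam| signals tension between data and constraint.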
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.
2017-07-03
The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute-intensive application such as AIMD is a good candidate to leverage this processing power. In this paper, we focus on adding thread-level parallelism to the plane wave DFT methodology implemented in NWChem. Through a careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results on the complete AIMD simulation for a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.
Spontaneous Lorentz and diffeomorphism violation, massive modes, and gravity
NASA Astrophysics Data System (ADS)
Bluhm, Robert; Fung, Shu-Hong; Kostelecký, V. Alan
2008-03-01
Theories with spontaneous local Lorentz and diffeomorphism violation contain massless Nambu-Goldstone modes, which arise as field excitations in the minimum of the symmetry-breaking potential. If the shape of the potential also allows excitations above the minimum, then an alternative gravitational Higgs mechanism can occur in which massive modes involving the metric appear. The origin and basic properties of the massive modes are addressed in the general context involving an arbitrary tensor vacuum value. Special attention is given to the case of bumblebee models, which are gravitationally coupled vector theories with spontaneous local Lorentz and diffeomorphism violation. Mode expansions are presented in both local and spacetime frames, revealing the Nambu-Goldstone and massive modes via decomposition of the metric and bumblebee fields, and the associated symmetry properties and gauge fixing are discussed. The class of bumblebee models with kinetic terms of the Maxwell form is used as a focus for more detailed study. The nature of the associated conservation laws and the interpretation as a candidate alternative to Einstein-Maxwell theory are investigated. Explicit examples involving smooth and Lagrange-multiplier potentials are studied to illustrate features of the massive modes, including their origin, nature, dispersion laws, and effects on gravitational interactions. In the weak static limit, the massive mode and Lagrange-multiplier fields are found to modify the Newton and Coulomb potentials. The nature and implications of these modifications are examined.
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917
Shang, Fengjun; Jiang, Yi; Xiong, Anping; Su, Wen; He, Li
2016-11-18
With the integrated development of the Internet, wireless sensor technology, cloud computing, and the mobile Internet, research on and applications of the Internet of Things have received a great deal of attention. A Wireless Sensor Network (WSN) is one of the important information technologies in the Internet of Things; it integrates multiple technologies to detect and gather information in a network environment by mutual cooperation, using a variety of methods to process and analyze data, implement awareness, and perform tests. This paper mainly researches the localization of sensor nodes in a wireless sensor network. Firstly, a multi-granularity region partition is proposed to divide the location region. In the range-based method, the RSSI (Received Signal Strength Indicator) is used to estimate distance, and the optimal RSSI value is computed by the Gaussian fitting method. Furthermore, a Voronoi diagram is used to divide the region: each anchor node is regarded as the center of a region, the whole positioning region is divided into several regions, the sub-regions of neighboring nodes are combined into triangles, and the unknown node is locked into the resulting area. Secondly, the multi-granularity region division and the Lagrange multiplier method are used to calculate the final coordinates. Because nodes are influenced by many factors in practical applications, two positioning methods are designed. When the unknown node is inside a positioning unit, we use the method of vector similarity, and then use the centroid algorithm to calculate the final coordinates of the unknown node. When the unknown node is outside a positioning unit, we establish a Lagrange equation containing the constraint condition to calculate initial coordinates, and then use the Taylor expansion formula to correct the coordinates of the unknown node. In addition, this localization method has been validated in a real environment.
NASA Astrophysics Data System (ADS)
Paine, Gregory Harold
1982-03-01
The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. 
Discussions are given as to how statistical neurodynamics can be used to gain a better understanding of the behavior of these systems.
Various Forms of BRST Symmetry in Abelian 2-FORM Gauge Theory
NASA Astrophysics Data System (ADS)
Rai, Sumit Kumar; Mandal, Bhabani Prasad
We derive the various forms of BRST symmetry using Batalin-Fradkin-Vilkovisky approach in the case of Abelian 2-form gauge theory. We show that the so-called dual BRST symmetry is not an independent symmetry but the generalization of BRST symmetry obtained from the canonical transformation in the bosonic and ghost sector. We further obtain the new forms of both BRST and dual-BRST symmetry by making a general transformation in the Lagrange multipliers of the bosonic and ghost sector of the theory.
Optimization of investment portfolio weight of stocks affected by market index
NASA Astrophysics Data System (ADS)
Azizah, E.; Rusyaman, E.; Supian, S.
2017-01-01
Stock price assessment, selection of an optimal combination, and measuring the risk of a portfolio investment are important issues for investors. In this paper, a single index model is used for the assessment of stock prices, and an optimization model is formulated using the Lagrange multiplier technique to determine the proportion of assets to be invested. The level of risk is estimated using the variance. These models are used to analyze stock price data from Lippo Bank and Bumi Putera.
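For a concrete instance of the Lagrange multiplier step (with hypothetical numbers, not the paper's Lippo Bank / Bumi Putera data), the minimum-variance weights under a budget constraint have a well-known closed form.

```python
import numpy as np

# Sketch (hypothetical covariance, not the paper's data): minimize the
# portfolio variance w^T Sigma w subject to the budget constraint
# 1^T w = 1. The Lagrangian w^T Sigma w - lam * (1^T w - 1) yields the
# closed form w = Sigma^{-1} 1 / (1^T Sigma^{-1} 1).
Sigma = np.array([[0.040, 0.006],
                  [0.006, 0.090]])     # hypothetical return covariance
ones = np.ones(2)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                          # normalize so weights sum to 1
print("weights:", w, "portfolio variance:", w @ Sigma @ w)
```

With these numbers the lower-variance asset gets the larger weight, and the resulting portfolio variance is below that of either asset alone.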
Analytical Solution for the Free Vibration Analysis of Delaminated Timoshenko Beams
Abedi, Maryam
2014-01-01
This work presents a method to find the exact solutions for the free vibration analysis of a delaminated beam based on the Timoshenko type with different boundary conditions. The solutions are obtained by the method of Lagrange multipliers in which the free vibration problem is posed as a constrained variational problem. The Legendre orthogonal polynomials are used as the beam eigenfunctions. Natural frequencies and mode shapes of various Timoshenko beams are presented to demonstrate the efficiency of the methodology. PMID:24574879
2011-01-01
gallon. The data are cross sectional and a Breusch-Pagan test finds that heteroscedasticity is a problem. To correct for it, the analysis re... heteroscedasticity after a fixed-effects model uses a Breusch and Pagan Lagrange multiplier test (Baum, 2006a). After a random-effects model the test is a... The data originate from 33 CWSs over 13 years, so the next step is to test for CWS-specific effects. The FE model in the table presents
A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
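The Augmented Lagrange Multiplier strategy named above can be sketched on a toy problem. This is not ADS itself: the inner unconstrained solver here is SciPy's BFGS standing in for the program's optimizer level.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Augmented Lagrange Multiplier sketch (toy problem):
# minimize f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
# Each outer iteration minimizes the augmented Lagrangian
#   f(x) + lam * h(x) + (r/2) * h(x)^2
# without constraints, then updates the multiplier: lam <- lam + r * h(x).
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

x, lam, r = np.zeros(2), 0.0, 10.0
for _ in range(20):
    res = minimize(lambda z: f(z) + lam * h(z) + 0.5 * r * h(z) ** 2, x)
    x = res.x
    lam += r * h(x)                    # multiplier (dual) update

print("x =", x, "lambda =", lam)       # converges to (0.5, 0.5), lam -> -1
```

The quadratic penalty keeps the inner problems well-conditioned while the multiplier update drives the constraint violation to zero, so the penalty weight r never has to grow unboundedly.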
Integration of progressive hedging and dual decomposition in stochastic integer programs
Watson, Jean-Paul; Guo, Ge; Hackebeil, Gabriel; ...
2015-04-07
We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH into Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. Finally, we report computational results on server location and unit commitment instances.
Variational estimate method for solving autonomous ordinary differential equations
NASA Astrophysics Data System (ADS)
Mungkasi, Sudi
2018-04-01
In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier chosen optimally, so that the formulation leads to an accurate solution of the problem. The variational estimate is an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute; this is a great advantage of the variational estimate formulation.
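A variational-iteration-style estimate of this kind can be sketched symbolically. The multiplier choice λ = -1 below is the standard simple choice for a first-order problem; the paper's optimally chosen multiplier may differ.

```python
import sympy as sp

# Sketch (generic, not necessarily the paper's formulation): for the
# first-order problem u' + u = 0, u(0) = 1, the correction functional
# with Lagrange multiplier lambda = -1 reads
#   u_{n+1}(t) = u_n(t) - Int_0^t (u_n'(s) + u_n(s)) ds,
# and each iteration rebuilds one more Taylor term of the exact e^{-t}.
t, s = sp.symbols("t s")
u = sp.Integer(1)                       # initial estimate u_0 = u(0)
for _ in range(4):
    integrand = (sp.diff(u, t) + u).subs(t, s)
    u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))

print(u)   # 1 - t + t**2/2 - t**3/6 + t**4/24
```

Because the integral is evaluated in closed form, each iterate is an explicit formula, which is the convenience the abstract emphasizes.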
Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.
Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua
2016-11-01
This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.
A Jacobian-free Newton Krylov method for mortar-discretized thermomechanical contact problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Glen, E-mail: Glen.Hansen@inl.gov
2011-07-20
Multibody contact problems are common within the field of multiphysics simulation. Applications involving thermomechanical contact scenarios are also quite prevalent. Such problems can be challenging to solve due to the likelihood of thermal expansion affecting contact geometry which, in turn, can change the thermal behavior of the components being analyzed. This paper explores a simple model of a light water reactor nuclear fuel rod, which consists of cylindrical pellets of uranium dioxide (UO₂) fuel sealed within a Zircaloy cladding tube. The tube is initially filled with helium gas, which fills the gap between the pellets and cladding tube. The accurate modeling of heat transfer across the gap between fuel pellets and the protective cladding is essential to understanding fuel performance, including cladding stress and behavior under irradiated conditions, which are factors that affect the lifetime of the fuel. The thermomechanical contact approach developed here is based on the mortar finite element method, where Lagrange multipliers are used to enforce weak continuity constraints at participating interfaces. In this formulation, the heat equation couples to linear mechanics through a thermal expansion term. Lagrange multipliers are used to formulate the continuity constraints for both heat flux and interface traction at contact interfaces. The resulting system of nonlinear algebraic equations is cast in residual form for solution of the transient problem. A Jacobian-free Newton Krylov method is used to provide for fully-coupled solution of the coupled thermal contact and heat equations.
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desktop systems. An effective way to reduce power consumption is to lower the supply voltage, since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of a variable-voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadlines, execution times, periods, and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum-energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size, given information about the task size (maximum, minimum), execution time (best case, worst case), and deadlines, is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. 
Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource and latency constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
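The flavor of the Lagrange-multiplier relation used for minimum-energy voltage scheduling can be sketched on a toy instance. This is a hedged illustration: the quadratic energy model, the common-deadline setup, and all names are assumptions for exposition, not the dissertation's exact formulation.

```python
# Minimize  E(v) = sum_i c_i * v_i^2     (energy grows ~ quadratically with voltage)
# subject to sum_i c_i / v_i = T         (all tasks finish by the common deadline T).
#
# Stationarity of the Lagrangian L = E + lam*(sum_i c_i/v_i - T) gives
#   2*c_i*v_i = lam*c_i/v_i^2  =>  v_i^3 = lam/2 for every i,
# i.e. under this toy model every task should run at the same voltage/speed.

def optimal_voltages(cycles, deadline):
    """Equal-speed assignment implied by the Lagrange condition."""
    v = sum(cycles) / deadline          # common speed meeting the deadline exactly
    return [v] * len(cycles)

def energy(cycles, volts):
    return sum(c * v * v for c, v in zip(cycles, volts))

cycles, T = [2.0, 3.0, 5.0], 4.0
v_star = optimal_voltages(cycles, T)

# Any feasible perturbation (still meeting the deadline) costs more energy.
perturbed = [v_star[0] * 1.3, v_star[1] * 0.9, v_star[2]]
t_used = cycles[0] / perturbed[0] + cycles[1] / perturbed[1]
perturbed[2] = cycles[2] / (T - t_used)   # rescale last task to still meet T

assert energy(cycles, v_star) < energy(cycles, perturbed)
```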
Connection forces in deformable multibody dynamics
NASA Technical Reports Server (NTRS)
Shabana, A. A.; Chang, C. W.
1989-01-01
In the dynamic formulation of holonomic and nonholonomic systems based on the D'Alembert-Lagrange equation, the forces of constraints are maintained in the dynamic equations by introducing auxiliary variables, called Lagrange multipliers. This approach introduces a set of generalized reaction forces associated with the system generalized coordinates. Different sets of variables can be used as generalized coordinates and, accordingly, the generalized reactions associated with these generalized coordinates may not be the actual reaction forces at the joints. In rigid body dynamics, the generalized reaction forces and the actual reaction forces at the joints represent equipollent systems of forces, since they produce the same total forces and moments at and about any point on the rigid body. This is not, however, the case in deformable body analyses, wherein the generalized reaction forces depend on the system generalized reference and elastic coordinates. In this paper, a method for determining the actual reaction forces at the joints from the generalized reaction forces in deformable multibody systems is presented.
NASA Technical Reports Server (NTRS)
Navon, I. M.
1984-01-01
A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
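The multiplier method referenced above can be sketched on a toy problem. This is a hedged illustration in the spirit of Bertsekas-style augmented Lagrangian methods, not the paper's shallow-water setting: plain gradient descent stands in for the conjugate-gradient inner solver, and the problem and constants are invented for the example.

```python
# Augmented-Lagrangian (multiplier) method on:
#   minimize f(x) = x1^2 + x2^2   subject to  c(x) = x1 + x2 - 1 = 0.
# L(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)^2; each outer iteration minimizes
# L over x, then updates the multiplier with the constraint residual.

def solve(rho=10.0, outer=20, inner=500, lr=0.01):
    x = [0.0, 0.0]
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):                # inner: minimize L(x, lam) over x
            c = x[0] + x[1] - 1.0
            g0 = 2 * x[0] + lam + rho * c     # dL/dx1
            g1 = 2 * x[1] + lam + rho * c     # dL/dx2
            x = [x[0] - lr * g0, x[1] - lr * g1]
        lam += rho * (x[0] + x[1] - 1.0)      # multiplier update
    return x, lam

x, lam = solve()
assert abs(x[0] - 0.5) < 1e-4 and abs(x[1] - 0.5) < 1e-4
assert abs(lam + 1.0) < 1e-3                  # optimal multiplier is -1
```

The multiplier update converges geometrically (here by a factor of roughly 1/(1+rho) per outer iteration), which is why the approach scales to the large-dimensional constrained problems the abstract describes.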
Analysis of magnetic fields using variational principles and CELAS2 elements
NASA Technical Reports Server (NTRS)
Frye, J. W.; Kasper, R. G.
1977-01-01
Prospective techniques for analyzing magnetic fields using NASTRAN are reviewed. A variational principle utilizing a vector potential function is presented which has, as its Euler equations, the required field equations and boundary conditions for static magnetic fields including current sources. The need to augment this variational principle with a constraint condition is discussed. Some results using the Lagrange multiplier method to apply the constraint and CELAS2 elements to simulate the matrices are given. Practical considerations of using large numbers of CELAS2 elements are discussed.
Axisymmetric solid elements by a rational hybrid stress method
NASA Technical Reports Server (NTRS)
Tian, Z.; Pian, T. H. H.
1985-01-01
Four-node axisymmetric solid elements are derived by a new version of the hybrid stress method in which the assumed stresses are expressed as complete polynomials in natural coordinates. The stress equilibrium conditions are introduced through the use of additional displacements as Lagrange multipliers. A rational procedure is to choose the displacement terms such that the resulting strains are also complete polynomials of the same order. Example problems all indicate that elements obtained by this procedure yield better results in displacements and stresses than those obtained by other finite elements.
NASA Astrophysics Data System (ADS)
Scarfone, A. M.; Matsuzoe, H.; Wada, T.
2016-09-01
We show the robustness of the structure of the Legendre transform in thermodynamics against the replacement of the standard linear average with the Kolmogorov-Nagumo nonlinear average to evaluate the expectation values of the macroscopic physical observables. The consequence of this statement is twofold: 1) the relationships between the expectation values and the corresponding Lagrange multipliers still hold in the present formalism; 2) the universality of the Gibbs equation, as well as of other thermodynamic relations, is unaffected by the structure of the average used in the theory.
Analysis of reliability for multi-ring interconnection of RPR networks
NASA Astrophysics Data System (ADS)
Liu, Jia; Jin, Depeng; Zeng, Lieguang; Li, Yong
2008-11-01
In this paper, the reliability and MTTF (Mean Time to Failure) of multi-ring RPR (Resilient Packet Ring) networks are calculated for the cases of single-link failure, double-link failure and no failure, respectively. The formulas contain parameters such as the total number of stations N, the number of sub-rings R, and the distribution of Ni, the number of stations in the i-th sub-ring (1<=i<=R). The relationship between the reliability/MTTF and the parameters N, R and Ni is analyzed. The result shows that the reliability/MTTF of the multi-ring RPR increases as the variance of Ni decreases. Using the Lagrange multipliers method, it is also proved that the reliability/MTTF is maximized when Ni=Nj (i≠j and 1<=i,j<=R), i.e. the optimal reliability of a multi-ring RPR is achieved when var(Ni)=0.
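The equal-partition conclusion above follows a classic Lagrange-multiplier argument, sketched here on a toy surrogate reliability model (the paper's exact reliability formulas are not reproduced; the quadratic falloff below is an assumption for illustration only).

```python
# Toy surrogate: suppose each sub-ring's log-reliability falls off with the
# square of its station count, so
#   log R = -a * sum_i N_i^2   subject to  sum_i N_i = N.
# The Lagrange condition -2*a*N_i = lam forces all N_i equal, i.e.
# var(N_i) = 0 maximizes R -- matching the paper's conclusion.

from itertools import product

def log_reliability(partition, a=0.01):
    return -a * sum(n * n for n in partition)

N, R = 12, 3
best = max(
    (p for p in product(range(1, N + 1), repeat=R) if sum(p) == N),
    key=log_reliability,
)
assert best == (4, 4, 4)   # the balanced partition wins by exhaustive check
```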
NASA Astrophysics Data System (ADS)
Masuda, Kazuaki; Aiyoshi, Eitaro
We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines, for every seller, whether or not to sell his/her article and, for every buyer, which article(s) to buy, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
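The multiplier-as-price interpretation can be made concrete with a much simpler dual update than the paper's PSO: a plain projected subgradient ("price adjustment") iteration on a toy one-article market. All valuations, reserves, and step sizes below are illustrative assumptions.

```python
# The dual subgradient at price p is the excess demand; raising the price when
# demand exceeds supply (and vice versa) drives the market toward clearing.

def demand(price, valuations):
    return sum(1 for v in valuations if v > price)   # buyers still willing to buy

def supply(price, reserves):
    return sum(1 for r in reserves if r < price)     # sellers willing to sell

valuations = [9.0, 7.0, 5.0, 3.0]
reserves   = [2.0, 4.0, 6.0, 8.0]

p, step = 0.0, 0.5
for _ in range(200):
    # projected subgradient step on the dual variable (the price), p >= 0
    p = max(0.0, p + step * (demand(p, valuations) - supply(p, reserves)))
    step *= 0.99                                     # diminishing step size

assert supply(p, reserves) == demand(p, valuations)  # market clears
```

For genuinely combinatorial multi-article utilities the dual is non-smooth and multimodal, which is the motivation the abstract gives for a heuristic global search such as PSO instead of this local update.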
Numerical models for fluid-grains interactions: opportunities and limitations
NASA Astrophysics Data System (ADS)
Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony
2017-06-01
In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro and meso scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro scale simulations and to use that knowledge to improve the meso scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franceschini, Andrea; Ferronato, Massimiliano, E-mail: massimiliano.ferronato@unipd.it; Janna, Carlo
The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions. - Highlights: • A numerical model is developed for the simulation of fault and fracture mechanics. • The model is implemented in the framework of the Finite Element method and with the aid of Lagrange multipliers. • The proposed formulation introduces a new contribution due to the frictional work on the portion of activated fault. • The resulting algorithm is highly non-linear as the portion of activated fault is itself unknown. • The numerical solution is validated against analytical results and proves to be stable also in realistic applications.
Alternative to the Palatini method: A new variational principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goenner, Hubert
2010-06-15
A variational principle is suggested within Riemannian geometry, in which an auxiliary metric and the Levi-Civita connection are varied independently. The auxiliary metric plays the role of a Lagrange multiplier and introduces nonminimal coupling of matter to the curvature scalar. The field equations are second-order PDEs and easier to handle than those following from the so-called Palatini method. Moreover, in contrast to the latter method, no gradients of the matter variables appear. In cosmological modeling, the physics resulting from the alternative variational principle will differ from the modeling using the standard Palatini method.
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomials and the Legendre-Gauss quadrature formula are used, and the Lagrange multiplier scheme is then employed to reduce such problems to systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those obtained by other techniques, and we show the accuracy and applicability of the new numerical approach through two numerical examples.
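The reduction "Lagrange multiplier scheme → algebraic equations" mentioned above is worth seeing on the smallest possible instance. This is a generic toy quadratic with one equality constraint, not the paper's fractional operational-matrix formulation; the numbers are invented for illustration.

```python
# minimize (x-2)^2 + (y-1)^2   subject to  x + 2y = 8.
#
# Stationarity of L = f + lam*(x + 2y - 8) gives the algebraic (KKT) system
#   2(x - 2) + lam   = 0
#   2(y - 1) + 2*lam = 0
#   x + 2y           = 8,
# a 3x3 linear system in (x, y, lam), solved here by substitution.

def solve_kkt():
    # x = 2 - lam/2 and y = 1 - lam from the first two equations; plugging
    # into the constraint: (2 - lam/2) + 2*(1 - lam) = 8  =>  lam = -8/5.
    lam = (4.0 - 8.0) / 2.5
    x = 2.0 - lam / 2.0
    y = 1.0 - lam
    return x, y, lam

x, y, lam = solve_kkt()
assert abs(x + 2 * y - 8.0) < 1e-12          # constraint holds
assert abs(2 * (x - 2) + lam) < 1e-12        # stationarity in x
assert abs(2 * (y - 1) + 2 * lam) < 1e-12    # stationarity in y
```

In the paper's setting, the operational matrices play the role of the linear maps here, so the final system is larger but still purely algebraic.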
Finite element model for brittle fracture and fragmentation
Li, Wei; Delaney, Tristan J.; Jiao, Xiangmin; ...
2016-06-01
A new computational model for brittle fracture and fragmentation has been developed based on finite element analysis of non-linear elasticity equations. The proposed model propagates cracks by splitting the mesh nodes along the most over-strained edges, based on the principal direction of the strain tensor. To prevent elements from overlapping and folding under large deformations, robust geometrical constraints using the method of Lagrange multipliers have been incorporated. The model has been applied to 2D simulations of the formation and propagation of cracks in brittle materials, and of the fracture and fragmentation of stretched and compressed materials.
Optimal control of a harmonic oscillator: Economic interpretations
NASA Astrophysics Data System (ADS)
Janová, Jitka; Hampel, David
2013-10-01
Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and the Lagrange multipliers in the profit maximization problem is well known. Using a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for a harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with respect to the well-known reasoning for these in other problems.
Dynamics of omnidirectional unmanned rescue vehicle with mecanum wheels
NASA Astrophysics Data System (ADS)
Typiak, Andrzej; Łopatka, Marian Janusz; Rykała, Łukasz; Kijek, Magdalena
2018-01-01
The work presents the dynamic equations of motion of an unmanned six-wheeled vehicle with mecanum wheels for rescue applications, derived with the use of Lagrange equations of the second kind with multipliers. Thanks to its mecanum wheels, the analysed vehicle has three degrees of freedom and can move on flat ground in any direction with any orientation of the platform's frame. In order to derive the dynamic equations of motion of the mentioned object, the kinetic potential of the system and the generalized forces affecting the system are determined. The results of a solution of the inverse dynamics problem are also presented.
Split-step eigenvector-following technique for exploring enthalpy landscapes at absolute zero.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra
2006-03-16
The mapping of enthalpy landscapes is complicated by the coupling of particle position and volume coordinates. To address this issue, we have developed a new split-step eigenvector-following technique for locating minima and transition points in an enthalpy landscape at absolute zero. Each iteration is split into two steps in order to independently vary system volume and relative atomic coordinates. A separate Lagrange multiplier is used for each eigendirection in order to provide maximum flexibility in determining step sizes. This technique will be useful for mapping the enthalpy landscapes of bulk systems such as supercooled liquids and glasses.
Hot forming of composite prepreg: Numerical analyses
NASA Astrophysics Data System (ADS)
Guzman-Maldonado, Eduardo; Hamila, Nahiène; Boisse, Philippe; El Azzouzi, Khalid; Tardif, Xavier; Moro, Tanguy; Chatel, Sylvain; Fideu, Paulin
2017-10-01
The work presented here is part of the "FORBANS" project on the Hot Drape Forming (HDF) process applied to unidirectional prepreg laminates. To gain a thorough understanding of this process, a combined experimental and numerical strategy is adopted. This paper is focused on the numerical analysis using the finite element method (FEM) with a hyperelastic constitutive law. Each prepreg layer is modelled by shell elements. These elements account for the tension, in-plane shear and bending behaviour of the ply at different temperatures. The contact/friction during the forming process is taken into account using forward increment Lagrange multipliers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, J.V.
The published work on exact penalization is indeed vast. Recently this work has indicated an intimate relationship between exact penalization, Lagrange multipliers, and problem stability or calmness. In the present work we chronicle this development within a simple idealized problem framework, wherein we unify, extend, and refine much of the known theory. In particular, most of the foundations for constrained optimization are developed with the aid of exact penalization techniques. Our approach is highly geometric and is based upon the elementary subdifferential theory for distance functions. It is assumed that the reader is familiar with the theory of convex sets and functions. 54 refs.
NASA Astrophysics Data System (ADS)
Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.
2018-04-01
In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a type of fixed Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.
Global geometry of non-planar 3-body motions
NASA Astrophysics Data System (ADS)
Salehani, Mahdi Khajeh
2011-12-01
The aim of this paper is to study the global geometry of non-planar 3-body motions in the realms of equivariant Differential Geometry and Geometric Mechanics. This work was intended as an attempt at bringing together these two areas, in which geometric methods play the major role, in the study of the 3-body problem. It is shown that the Euler equations of a three-body system with non-planar motion introduce non-holonomic constraints into the Lagrangian formulation of mechanics. Applying the method of undetermined Lagrange multipliers to study the dynamics of three-body motions reduced to the moduli space M̄, subject to the non-holonomic constraints, yields the generalized Euler-Lagrange equations of non-planar three-body motions in M̄. As an application of the derived dynamical equations at the level of M̄, we completely settle the question posed by A. Wintner in his book [The analytical foundations of Celestial Mechanics, Sections 394-396, 435 and 436. Princeton University Press (1941)] on classifying the constant inclination solutions of the three-body problem.
Kamensky, David; Hsu, Ming-Chen; Schillinger, Dominik; Evans, John A.; Aggarwal, Ankush; Bazilevs, Yuri; Sacks, Michael S.; Hughes, Thomas J. R.
2014-01-01
In this paper, we develop a geometrically flexible technique for computational fluid–structure interaction (FSI). The motivating application is the simulation of tri-leaflet bioprosthetic heart valve function over the complete cardiac cycle. Due to the complex motion of the heart valve leaflets, the fluid domain undergoes large deformations, including changes of topology. The proposed method directly analyzes a spline-based surface representation of the structure by immersing it into a non-boundary-fitted discretization of the surrounding fluid domain. This places our method within an emerging class of computational techniques that aim to capture geometry on non-boundary-fitted analysis meshes. We introduce the term “immersogeometric analysis” to identify this paradigm. The framework starts with an augmented Lagrangian formulation for FSI that enforces kinematic constraints with a combination of Lagrange multipliers and penalty forces. For immersed volumetric objects, we formally eliminate the multiplier field by substituting a fluid–structure interface traction, arriving at Nitsche’s method for enforcing Dirichlet boundary conditions on object surfaces. For immersed thin shell structures modeled geometrically as surfaces, the tractions from opposite sides cancel due to the continuity of the background fluid solution space, leaving a penalty method. Application to a bioprosthetic heart valve, where there is a large pressure jump across the leaflets, reveals shortcomings of the penalty approach. To counteract steep pressure gradients through the structure without the conditioning problems that accompany strong penalty forces, we resurrect the Lagrange multiplier field. Further, since the fluid discretization is not tailored to the structure geometry, there is a significant error in the approximation of pressure discontinuities across the shell. 
This error becomes especially troublesome in residual-based stabilized methods for incompressible flow, leading to problematic compressibility at practical levels of refinement. We modify existing stabilized methods to improve performance. To evaluate the accuracy of the proposed methods, we test them on benchmark problems and compare the results with those of established boundary-fitted techniques. Finally, we simulate the coupling of the bioprosthetic heart valve and the surrounding blood flow under physiological conditions, demonstrating the effectiveness of the proposed techniques in practical computations. PMID:25541566
NASA Astrophysics Data System (ADS)
Feng, Xueshang; Li, Caixia; Xiang, Changqing; Zhang, Man; Li, HuiChao; Wei, Fengsi
2017-11-01
A second-order path-conservative scheme with a Godunov-type finite-volume method has been implemented to advance the equations of single-fluid solar wind plasma magnetohydrodynamics (MHD) in time. This code operates on the six-component composite grid system in three-dimensional spherical coordinates with hexahedral cells of quadrilateral frustum type. The generalized Osher-Solomon Riemann solver is employed based on a numerical integration of the path-dependent dissipation matrix. For simplicity, the straight line segment path is used, and the path integral is evaluated in a fully numerical way by a high-order numerical Gauss-Legendre quadrature. Besides its very close similarity to the Godunov type, the resulting scheme retains the attractive features of the original solver: it is nonlinear, free of entropy-fix, differentiable, and complete, in that each characteristic field results in a different numerical viscosity, due to the full use of the MHD eigenstructure. By using a minmod limiter for spatial oscillation control, the path-conservative scheme is realized for the generalized Lagrange multiplier and the extended generalized Lagrange multiplier formulation of solar wind MHD systems. This new model, second order in space and time, is written in the FORTRAN language with Message Passing Interface parallelization and validated in modeling the time-dependent large-scale structure of the solar corona, driven continuously by Global Oscillation Network Group data. To demonstrate the suitability of our code for the simulation of solar wind, we present selected results from 2009 October 9 to 2009 December 29 to show its capability of producing a structured solar corona in agreement with solar coronal observations.
Data-Driven Modeling of Solar Corona by a New 3d Path-Conservative Osher-Solomon MHD Model
NASA Astrophysics Data System (ADS)
Feng, X. S.; Li, C.
2017-12-01
A second-order path-conservative scheme with a Godunov-type finite volume method (FVM) has been implemented to advance the equations of single-fluid solar wind plasma magnetohydrodynamics (MHD) in time. This code operates on the six-component composite grid system in 3D spherical coordinates with hexahedral cells of quadrilateral frustum type. The generalized Osher-Solomon Riemann solver is employed based on a numerical integration of the path-dependent dissipation matrix. For simplicity, the straight line segment path is used, and the path integral is evaluated in a fully numerical way by a high-order numerical Gauss-Legendre quadrature. Besides its close similarity to the Godunov type, the resulting scheme retains the attractive features of the original solver: it is nonlinear, free of entropy-fix, differentiable, and complete, in that each characteristic field results in a different numerical viscosity, due to the full use of the MHD eigenstructure. By using a minmod limiter for spatial oscillation control, the path-conservative scheme is realized for the generalized Lagrange multiplier (GLM) and the extended generalized Lagrange multiplier (EGLM) formulation of solar wind MHD systems. This new model, second order in space and time, is written in the FORTRAN language with Message Passing Interface (MPI) parallelization, and validated in modeling the time-dependent large-scale structure of the solar corona, driven continuously by Global Oscillation Network Group (GONG) data. To demonstrate the suitability of our code for the simulation of solar wind, we present selected results from 2009 October 9 to December 29, and for the year 2008, to show its capability of producing structured solar wind in agreement with the observations.
NASA Astrophysics Data System (ADS)
Mokhtar, Md Asjad; Kamalakar Darpe, Ashish; Gupta, Kshitij
2017-08-01
The ever-increasing demand for highly efficient rotating machinery reduces the clearance between rotating and non-rotating parts and increases the chances of interaction between these parts. The rotor-stator contact, known as rub, has always been recognized as one of the potential causes of rotor system malfunctions and a source of secondary failures. It is one of the few causes that influence both lateral and torsional vibrations. In this paper, the rotor-stator interaction phenomenon is investigated in the finite element framework using a Lagrange multiplier based contact mechanics approach. The stator is modelled as a beam that can respond to axial penetration and lateral friction force during the contact with the rotor. This ensures a dynamic stator contact boundary and more realistic contact conditions, in contrast to most of the earlier approaches. The rotor bending-torsional mode coupling during contact is considered and the vibration responses in bending and torsion are analysed. The effect of parameters such as clearance, friction coefficient and stator stiffness is studied at various operating speeds, and it has been found that certain parameter values generate peculiar rub-related features. The presence of sub-harmonics in the lateral vibration frequency spectra is prominently observed when the rotor operates near an integer multiple of its lateral critical speed. The spectrum cascade of torsional vibration shows the presence of the bending critical speed along with larger amplitudes of frequencies close to the torsional natural frequency of the rotor. When the m × 1/n X component of the rotational frequency comes close to the torsional natural frequency, a stronger torsional vibration amplitude is noticed in the spectrum cascade. The combined information from the stator vibration and the rotor lateral-torsional vibration spectral features is proposed for robust rub identification.
NASA Astrophysics Data System (ADS)
Hakim, Lukmanul; Kubokawa, Junji; Yorino, Naoto; Zoka, Yoshifumi; Sasaki, Yutaka
Advancements have been made towards inclusion of both static and dynamic security into transfer capability calculation. However, to the authors' knowledge, work on incorporating corrective controls into the calculation has not yet been reported. Therefore, we propose a Total Transfer Capability (TTC) assessment considering transient stability corrective controls. The method is based on the Newton interior point method for nonlinear programming; transfer capability is computed as a maximization of power transfer, with both static and transient stability constraints incorporated into our Transient Stability Constrained Optimal Power Flow (TSCOPF) formulation. An interconnected power system is simulated under a severe unbalanced 3-phase 4-line-to-ground fault and, following the fault, generator and load are shed in a pre-defined sequence to mimic actual corrective controls. In a deregulated electricity market, both generator companies and large load customers are encouraged to actively participate in maintaining power system stability via corrective controls, upon agreement of compensation for being shed following a disturbance. Implementation of this proposal in actual power system operation should be carried out by combining it with the existing transient stabilization controller system. Utilization of these corrective controls increases TTC, as suggested by our numerical simulation. As Lagrange multipliers also describe the sensitivity of both inequality and equality constraints to the objective function, the selection of which generator or load to shed can be carried out on the basis of the values of the Lagrange multipliers of the respective generator's rotor angle stability and active power balance equations. Hence, the proposal in this paper can be utilized by a system operator to assess the maximum TTC for specific loads and network conditions.
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol blends the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
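The central role of the Lagrange multiplier in such online allocators can be sketched with a basic stochastic dual-subgradient loop. This is a hedged toy, not the paper's learn-and-adapt algorithm: the utility, channel model, and constants are invented for illustration.

```python
# Maximize E[log(1 + h_t * x_t)] subject to the long-term budget E[x_t] <= b.
# Each slot: a primal step uses the current multiplier as a "price" on the
# resource, then the multiplier is updated with the instantaneous budget
# violation (a virtual-queue / dual-subgradient update).

import random

def run(b=0.5, mu=0.02, T=50000, seed=1):
    random.seed(seed)
    lam, total = 1.0, 0.0
    for _ in range(T):
        h = random.uniform(0.5, 2.0)           # random channel gain this slot
        # primal: x maximizes log(1 + h*x) - lam*x  =>  x = max(0, 1/lam - 1/h)
        x = max(0.0, 1.0 / lam - 1.0 / h)
        total += x
        lam = max(1e-6, lam + mu * (x - b))    # multiplier tracks the violation
    return total / T

avg_x = run()
assert abs(avg_x - 0.5) < 0.05                 # long-run allocation meets budget
```

Because the multiplier integrates the constraint violation, the time-average allocation tracks the budget even though each slot's channel is random; the paper's contribution is learning good multipliers much faster than this plain update.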
Minimum Copies of Schrödinger’s Cat State in the Multi-Photon System
Lu, Yiping; Zhao, Qing
2016-01-01
Multi-photon entanglement has been successfully studied by many theoretical and experimental groups. However, as the number of entangled photons increases, some problems are encountered, such as the exponential increase of the time necessary to prepare the same number of copies of entangled states in experiment. In this paper, a new scheme is proposed based on Lagrange multipliers and feedback, which cuts down the required number of copies of Schrödinger’s Cat state in multi-photon experiments, which are realized with some noise in actual measurements, while keeping the standard deviation of the error in fidelity unchanged. It reduces about five percent of the measuring time of the eight-photon Schrödinger’s Cat state compared with the scheme used in the usual planning of actual measurements, and moreover it guarantees the same low error in fidelity. In addition, we also applied the same approach to the simulation of ten-photon entanglement, and we found that it reduces in principle about twenty-two percent of the required copies of Schrödinger’s Cat state compared with the conventionally used scheme of the uniform distribution; moreover, the distribution of optimized copies of the ten-photon Schrödinger’s Cat state gives better fidelity estimation than the uniform distribution for the same number of copies. PMID:27576585
Lagrange constraint neural network for audio varying BSS
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
The Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab initio model that does not assume the artificial neural network (ANN) model at all but derives it from the first principles of the Hamilton and Lagrange methodology, H(S,A) = f(S) − λ·C(S, A(x,t)), which incorporates the measurement constraint C(S, A(x,t)) = λ·([A]S − X) + (λ0 − 1)(Σi si − 1) using the vector Lagrange multiplier λ, and the a priori Shannon entropy f(S) = −Σi si log si as the contrast function of an unknown number of independent sources si. Szu et al. first solved, in 1997, the general blind source separation (BSS) problem for a spatial-temporal varying mixing matrix for real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatial-temporal varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a posteriori maximum-entropy methodologies, defined by an ANN weight matrix [W] with sigmoid σ post-processing, H(Y = σ([W]X)), due to Bell-Sejnowski, Amari, and Oja (BSAO) and called independent component analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X in BSAO or over the a priori source variables S in LCNN; this dictates which method works for a spatial-temporal varying [A(x,t)] that does not allow the neighborhood pixel average. We expect the success of sharper de-mixing by the LCNN method in terms of a controlled ground-truth experiment: the simulation of a time-variant mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saëns's Swan and a Rachmaninov cello concerto).
Stability of a diffuse linear pinch with axial boundaries
NASA Technical Reports Server (NTRS)
Einaudi, G.; Van Hoven, G.
1981-01-01
A formulation of the stability behavior of a finite-length pinch is presented. A general initial perturbation is expressed as a uniformly convergent sum over a complete discrete k set. A variational calculation is then performed, based on the energy principle, in which the end-boundary conditions appear as constraints. The requisite Lagrange multipliers mutually couple the elemental periodic excitations. The resulting extended form of delta-W still admits a proper second-variation treatment so that the minimization and stability considerations of Newcomb remain applicable. Comparison theorems are discussed as is the relevance of this end-effect model to the stability of solar coronal loops.
Multiple positive normalized solutions for nonlinear Schrödinger systems
NASA Astrophysics Data System (ADS)
Gou, Tianxiang; Jeanjean, Louis
2018-05-01
We consider the existence of multiple positive solutions to the nonlinear Schrödinger systems set on , under the constraint Here are prescribed, , and the frequencies are unknown and will appear as Lagrange multipliers. Two cases are studied, the first when , the second when In both cases, assuming that is sufficiently small, we prove the existence of two positive solutions. The first one is a local minimizer for which we establish the compactness of the minimizing sequences and also discuss the orbital stability of the associated standing waves. The second solution is obtained through a constrained mountain pass and a constrained linking respectively.
Finite Nilpotent BRST Transformations in Hamiltonian Formulation
NASA Astrophysics Data System (ADS)
Rai, Sumit Kumar; Mandal, Bhabani Prasad
2013-10-01
We consider the finite field dependent BRST (FFBRST) transformations in the context of the Hamiltonian formulation using the Batalin-Fradkin-Vilkovisky method. The non-trivial Jacobian of such transformations is calculated in extended phase space. The contribution from the Jacobian can be written as the exponential of a local functional of fields, which can be added to the effective Hamiltonian of the system. Thus, FFBRST in the Hamiltonian formulation with extended phase space also connects different effective theories. We establish this result with the help of two explicit examples. We also show that the FFBRST transformations are similar to canonical transformations in the sector of the Lagrange multiplier and its corresponding momenta.
NASA Astrophysics Data System (ADS)
Xu, Ding; Li, Qun
2017-01-01
This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid-automatic-repeat-request (HARQ) with chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of the perfect instantaneous channel state information (CSI), only the statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.
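The Lagrange multiplier approach described above decouples the per-round power decisions once the multiplier (the "price" of interference) is fixed, and the multiplier is then tuned until the interference budget is met. The sketch below shows this structure on an assumed convex surrogate (outage-like weights `a`, interference gains `g`, budget `Q` are all illustrative, not the paper's model), finding the multiplier by bisection.

```python
import numpy as np

# Assumed convex stand-in for the SU power allocation: minimize
# sum_k a_k/(1+p_k) (outage-like) subject to sum_k g_k*p_k <= Q
# (PU-interference budget). For fixed multiplier lam the Lagrangian
# decouples per transmission round; lam is then found by bisection.
a = np.array([1.0, 2.0, 3.0])   # per-round outage weights (assumed)
g = np.array([0.5, 1.0, 2.0])   # per-round interference gains (assumed)
Q = 3.0                          # interference budget

def p_of(lam):
    # KKT stationarity of a_k/(1+p) + lam*g_k*p gives this closed form
    return np.maximum(0.0, np.sqrt(a / (lam * g)) - 1.0)

lo, hi = 1e-9, 1e3
for _ in range(200):             # bisection on the multiplier
    lam = 0.5 * (lo + hi)
    if g @ p_of(lam) > Q:
        lo = lam                 # too much interference: raise the price
    else:
        hi = lam
p = p_of(lam)
print(np.round(p, 3), round(float(g @ p), 3))
```

Because total interference is monotone decreasing in the multiplier, the bisection converges to the allocation that meets the budget with equality.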
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
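The optimum sensitivity derivatives mentioned above rest on a standard result: the derivative of the optimal objective with respect to a fixed problem parameter equals the corresponding partial derivative of the Lagrangian, so the optimizer's Lagrange multipliers give the sensitivity directly. A minimal check on an assumed toy problem (not the paper's beam model):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (assumed): minimize f(x) = x1^2 + x2^2 subject to x1 + x2 = p.
# The sensitivity theorem says d f*(p)/dp equals the constraint's Lagrange
# multiplier at the optimum; here the optimum is x = (p/2, p/2), lam = p.
def f_star(p):
    res = minimize(lambda x: x @ x, x0=[0.0, 0.0],
                   constraints={"type": "eq",
                                "fun": lambda x: x[0] + x[1] - p})
    return res.fun, res.x

p = 1.5
fval, x = f_star(p)
lam = 2.0 * x[0]    # KKT stationarity: grad f = lam * grad g, i.e. 2*x_i = lam
# central finite difference of the optimal value as an independent check
dfdp = (f_star(p + 1e-4)[0] - f_star(p - 1e-4)[0]) / 2e-4
print(round(float(lam), 3), round(float(dfdp), 3))
```

The multiplier and the finite-difference sensitivity agree, which is exactly the property the nested optimization procedure in the abstract exploits.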
Acoustic response of a rectangular levitator with orifices
NASA Technical Reports Server (NTRS)
El-Raheb, Michael; Wagner, Paul
1990-01-01
The acoustic response of a rectangular cavity to speaker-generated excitation through waveguides terminating at orifices in the cavity walls is analyzed. To find the effects of orifices, acoustic pressure is expressed by eigenfunctions satisfying Neumann boundary conditions as well as by those satisfying Dirichlet ones. Some of the excess unknowns can be eliminated by point constraints set over the boundary, by appeal to Lagrange undetermined multipliers. The resulting transfer matrix must be further reduced by partial condensation to the order of a matrix describing unmixed boundary conditions. If the cavity is subjected to an axial temperature dependence, the transfer matrix is determined numerically.
Cellular Analysis of Boltzmann Most Probable Ideal Gas Statistics
NASA Astrophysics Data System (ADS)
Cahill, Michael E.
2018-04-01
Exact treatment of Boltzmann's most probable statistics for an ideal gas of identical-mass particles having translational kinetic energy gives a distribution law for velocity phase-space cell j which relates the particle energy e(j) and the particle population n(j) according to B e(j) = A − Ψ(n(j) + 1), where A and B are the Lagrange multipliers and Ψ is the digamma function defined by Ψ(x + 1) = d/dx ln(x!). A useful, sufficiently accurate approximation for Ψ is Ψ(x + 1) ≈ ln(e^(−γ) + x), where γ is the Euler constant (≈ 0.5772156649), so the distribution equation is approximately B e(j) = A − ln(e^(−γ) + n(j)), which can be inverted to solve for n(j), giving n(j) = (e^(B(e_H − e(j))) − 1) e^(−γ), where B e_H = A + γ and B e_H is a unitless particle energy which replaces the parameter A. The two approximate distribution equations imply that e_H is the highest particle energy and the highest particle population is n_H = (e^(B e_H) − 1) e^(−γ), since the population becomes negative if e(j) > e_H and the kinetic energy becomes negative if n(j) > n_H. An explicit construction of cells in velocity space which are equal in volume and homogeneous for almost all cells is shown to be useful in the analysis. Plots of sample distribution properties using e(j) as the independent variable are presented.
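The inverted population law and the forward distribution law quoted in the abstract are consistent by construction; the round trip can be checked numerically. The values of B and e_H below are illustrative assumptions.

```python
import numpy as np

# Cell-population law from the abstract (assumed B, e_H for illustration):
# n(j) = (exp(B*(e_H - e(j))) - 1) * exp(-gamma), with B*e_H = A + gamma.
gamma = 0.5772156649              # Euler's constant
B, eH = 2.0, 5.0
e = np.linspace(0.0, eH, 6)       # cell energies up to the cutoff e_H
n = (np.exp(B * (eH - e)) - 1.0) * np.exp(-gamma)

# Round trip: recover the forward law B*e(j) = A - ln(exp(-gamma) + n(j)).
A = B * eH - gamma
back = A - np.log(np.exp(-gamma) + n)
print(np.round(back - B * e, 12), round(float(n[-1]), 6))
```

The recovered left-hand side matches B·e(j) term by term, and the population vanishes exactly at the cutoff energy e(j) = e_H, as the abstract states.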
Chen, Gang; Song, Yongduan; Lewis, Frank L
2016-05-03
This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication-link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology that contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication-link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed for cooperative control of multiple robot arms was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
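In the classical method of moments, the Lagrange multipliers are chosen so that the exponential-family density reproduces the prescribed moments; finding them amounts to minimizing a convex dual (log-partition plus multiplier-moment inner product). A minimal grid-based sketch, with illustrative target moments (those of a Gaussian with mean 0.5 and variance 0.25; all values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# MaxEnt method of moments on a grid: find multipliers lam so that
# p(x) ∝ exp(-(lam[0]*x + lam[1]*x^2)) matches prescribed E[x], E[x^2].
x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]
mu = np.array([0.5, 0.25 + 0.5**2])     # target E[x], E[x^2] (assumed)

def dual(lam):
    # convex dual: log-partition + lam . mu; its minimizer matches the moments
    logp = -(lam[0] * x + lam[1] * x**2)
    return np.log(np.exp(logp).sum() * dx) + lam @ mu

lam = minimize(dual, x0=[0.0, 1.0]).x
p = np.exp(-(lam[0] * x + lam[1] * x**2))
p /= p.sum() * dx                        # normalize on the grid
m1 = float((x * p).sum() * dx)
m2 = float((x**2 * p).sum() * dx)
print(np.round([m1, m2], 3))
```

With first and second moments constrained, the MaxEnt solution is Gaussian; the Bayesian treatment in the abstract replaces this point estimate of the multipliers with a posterior over them.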
Finite elements based on consistently assumed stresses and displacements
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1985-01-01
Finite element stiffness matrices are derived using an extended Hellinger-Reissner principle in which internal displacements are added to serve as Lagrange multipliers to introduce the equilibrium constraint in each element. In a consistent formulation the assumed stresses are initially unconstrained and complete polynomials and the total displacements are also complete such that the corresponding strains are complete in the same order as the stresses. Several examples indicate that resulting properties for elements constructed by this consistent formulation are ideal and are less sensitive to distortions of element geometries. The method has been used to find the optimal stress terms for plane elements, 3-D solids, axisymmetric solids, and plate bending elements.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods, more flexible in their definition than standard SQP methods, for solving nonlinear constrained optimization problems. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
Contribution to the optimal shape design of two-dimensional internal flows with embedded shocks
NASA Technical Reports Server (NTRS)
Iollo, Angelo; Salas, Manuel D.
1995-01-01
We explore the practicability of optimal shape design for flows modeled by the Euler equations. We define a functional whose minimum represents the optimality condition. The gradient of the functional with respect to the geometry is calculated with the Lagrange multipliers, which are determined by solving a co-state equation. The optimization problem is then examined by comparing the performance of several gradient-based optimization algorithms. In this formulation, the flow field can be computed to an arbitrary order of accuracy. Finally, some results for internal flows with embedded shocks are presented, including a case for which the solution to the inverse problem does not belong to the design space.
Unified theory for inhomogeneous thermoelectric generators and coolers including multistage devices.
Gerstenmaier, York Christian; Wachutka, Gerhard
2012-11-01
A novel generalized Lagrange multiplier method for functional optimization with subsidiary conditions is presented and applied to the optimization of material distributions in thermoelectric converters. Multistage devices are considered within the same formalism by including a position-dependent electric current in the legs, leading to a modified thermoelectric equation. Previous analytical solutions for maximized efficiencies of generators and coolers, obtained by Sherman [J. Appl. Phys. 31, 1 (1960)], Snyder [Phys. Rev. B 86, 045202 (2012)], and Seifert et al. [Phys. Status Solidi A 207, 760 (2010)] by a method of local optimization of reduced efficiencies, are recovered by an independent proof. The outstanding maximization problems for generated electric power and cooling power can be solved swiftly by numerical solution of a differential equation system obtained within the new formalism. Provided suitable materials are available, inhomogeneous TE converters can achieve increased performance through purely temperature-dependent material properties in the thermoelectric legs, through purely spatial variation of material properties, or through a combination of both. It turns out that the optimization domain is larger for the second kind of device, which can thus outperform the first kind.
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at the joint-torque level. Our main contribution is to apply the separation principle to the Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
NASA Astrophysics Data System (ADS)
Joshi, Vaibhav; Jaiman, Rajeev K.
2018-05-01
We present a positivity preserving variational scheme for the phase-field modeling of incompressible two-phase flows with high density ratio. The variational finite element technique relies on the Allen-Cahn phase-field equation for capturing the phase interface on a fixed Eulerian mesh with mass conservative and energy-stable discretization. The mass conservation is achieved by enforcing a Lagrange multiplier which has both temporal and spatial dependence on the underlying solution of the phase-field equation. To make the scheme energy-stable in a variational sense, we discretize the spatial part of the Lagrange multiplier in the phase-field equation by the mid-point approximation. The proposed variational technique is designed to reduce the spurious and unphysical oscillations in the solution while maintaining the second-order accuracy of both spatial and temporal discretizations. We integrate the Allen-Cahn phase-field equation with the incompressible Navier-Stokes equations for modeling a broad range of two-phase flow and fluid-fluid interface problems. The coupling of the implicit discretizations corresponding to the phase-field and the incompressible flow equations is achieved via nonlinear partitioned iterative procedure. Comparison of results between the standard linear stabilized finite element method and the present variational formulation shows a remarkable reduction of oscillations in the solution while retaining the boundedness of the phase-indicator field. We perform a standalone test to verify the accuracy and stability of the Allen-Cahn two-phase solver. We examine the convergence and accuracy properties of the coupled phase-field solver through the standard benchmarks of the Laplace-Young law and a sloshing tank problem. Two- and three-dimensional dam break problems are simulated to assess the capability of the phase-field solver for complex air-water interfaces involving topological changes on unstructured meshes. 
Finally, we demonstrate the phase-field solver for a practical offshore engineering application of wave-structure interaction.
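The mass-conserving mechanism described above, a Lagrange multiplier added to the Allen-Cahn equation so the phase indicator's spatial mean stays fixed, can be illustrated in one dimension. The sketch below is an assumed explicit finite-difference toy on a periodic domain, not the paper's variational finite element scheme; parameters are illustrative.

```python
import numpy as np

# Mass-conserving Allen-Cahn toy (assumed 1-D periodic setup):
# phi_t = -(phi^3 - phi) + eps^2 * phi_xx + lam, with the scalar
# Lagrange multiplier lam chosen each step so that mean(phi) is preserved.
N, eps, dt = 200, 0.05, 1e-3
x = np.linspace(0.0, 1.0, N, endpoint=False)
phi = np.tanh((x - 0.5) / eps)            # initial diffuse interface
mass0 = phi.mean()

for _ in range(500):
    # periodic second difference (dx = 1/N, so 1/dx^2 = N^2)
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) * (N ** 2)
    rhs = -(phi**3 - phi) + eps**2 * lap
    lam = -rhs.mean()                     # multiplier enforcing d/dt mean(phi) = 0
    phi = phi + dt * (rhs + lam)

print(round(float(phi.mean() - mass0), 12))
```

Because the multiplier cancels the spatial mean of the right-hand side exactly, the discrete mass is conserved to machine precision at every step, which is the property the paper's temporally and spatially dependent multiplier generalizes.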
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu
2014-01-01
This paper studies the problem of the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics, which mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries and from an observed data matrix with an unknown fraction of its entries being arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and modeled in this paper as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm. Meanwhile, we put forward a novel and effective algorithm called augmented Lagrange multipliers to exactly solve the problem. For mixed Gaussian-impulse noise removal, we regard it as the problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is dominant if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can outperform the traditional methods significantly, not only in the simultaneous removal of Gaussian and impulse noise and the restoration ability for a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
NASA Astrophysics Data System (ADS)
Lovely, P. J.; Mutlu, O.; Pollard, D. D.
2007-12-01
Cohesive end-zones (CEZs) are regions of increased frictional strength and/or cohesion near the peripheries of faults that cause slip distributions to taper toward the fault-tip. Laboratory results, field observations, and theoretical models suggest an important role for CEZs in small-scale fractures and faults; however, their role in crustal-scale faulting and associated large earthquakes is less thoroughly understood. We present a numerical study of the potential role of CEZs on slip distributions in large, multi-segmented, strike-slip earthquake ruptures including the 1992 Landers Earthquake (Mw 7.2) and 1999 Hector Mine Earthquake (Mw 7.1). Displacement discontinuity is calculated using a quasi-static, 2D plane-strain boundary element (BEM) code for a homogeneous, isotropic, linear-elastic material. Friction is implemented by enforcing principles of complementarity. Model results with and without CEZs are compared with slip distributions measured by combined inversion of geodetic, strong ground motion, and teleseismic data. Stepwise and linear distributions of increasing frictional strength within CEZs are considered. The incorporation of CEZs in our model enables an improved match to slip distributions measured by inversion, suggesting that CEZs play a role in governing slip in large, strike-slip earthquakes. Additionally, we present a parametric study highlighting the very great sensitivity of modeled slip magnitude to small variations of the coefficient of friction. This result suggests that, provided a sufficiently well-constrained stress tensor and elastic moduli for the surrounding rock, relatively simple models could provide precise estimates of the magnitude of frictional strength. These results are verified by comparison with geometrically comparable finite element (FEM) models using the commercial code ABAQUS. In FEM models, friction is implemented by use of both Lagrange multipliers and penalty methods.
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian, the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, Bienvenido; Novo, Vicente
We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application, we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Fréchet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional, so that there is no gap with the necessary conditions. Lagrange multiplier rules are also given.
Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.
Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng
2011-10-01
This paper proposes a novel H(∞) filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H(∞) filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H(∞) performance of the filtering error system. Second, relaxed conditions for H(∞) performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H(∞) filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.
On partially massless theory in 3 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Sergei; Laboratoire Charles Coulomb UMR 5221, CNRS, Place Eugène Bataillon, F-34095, Montpellier; Deffayet, Cédric
2015-03-24
We analyze the first-order formulation of the ghost-free bigravity model in three dimensions known as zwei-dreibein gravity. For a special choice of parameters, it was argued to have an additional gauge symmetry and give rise to a partially massless theory. We provide a thorough canonical analysis and identify that whether the theory becomes partially massless depends on the form of the stability condition of the secondary constraint responsible for the absence of the ghost. Generically, it is found to be an equation for a Lagrange multiplier, implying that partially massless zwei-dreibein gravity does not exist. However, for special backgrounds this condition is identically satisfied, leading to the presence of additional symmetries, which however disappear at quadratic order in perturbations.
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
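In the primal problem above, the Lagrange multiplier method gives the minimum-risk portfolio under the budget constraint in closed form: stationarity of the Lagrangian yields Cw = θ1, with the multiplier θ fixed by the budget. A minimal sketch on an assumed equicorrelated risk matrix (identical variances, as in the paper's setting):

```python
import numpy as np

# Minimum-variance portfolio under a budget constraint (assumed toy data):
# minimize (1/2) w' C w subject to 1'w = 1. Lagrangian stationarity gives
# C w = theta * 1, so w = theta * C^{-1} 1 with theta set by the budget.
N = 5
C = np.eye(N) + 0.2 * (np.ones((N, N)) - np.eye(N))   # equicorrelated risks
ones = np.ones(N)

Cinv1 = np.linalg.solve(C, ones)
theta = 1.0 / (ones @ Cinv1)          # Lagrange multiplier from the budget
w = theta * Cinv1

print(np.round(w, 4), round(float(ones @ w), 4))
```

With identical variances and symmetric correlations the optimum is the equal-weight portfolio w_i = 1/N, exactly the baseline against which the random-matrix analysis of the primal and dual problems is carried out.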
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
NASA Astrophysics Data System (ADS)
Voloshinov, V. V.
2018-03-01
In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
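The δ-optimality idea above relaxes the exact Karush-Kuhn-Tucker conditions: an approximate solution is accepted if multipliers exist making the stationarity and complementarity residuals small. The sketch below checks these residuals on an assumed toy problem (problem data and tolerances are illustrative, not from the paper).

```python
import numpy as np

# Toy delta-KKT check (assumed example): min f(x) = (x1-1)^2 + (x2-1)^2
# subject to g(x) = x1 + x2 - 1 <= 0. The exact solution is x = (0.5, 0.5)
# with multiplier lam = 1; at an approximate solution the residuals are
# small but nonzero, which is what a delta-optimality condition quantifies.
def kkt_residuals(x, lam):
    grad_f = 2.0 * (x - 1.0)
    grad_g = np.ones(2)
    stationarity = float(np.linalg.norm(grad_f + lam * grad_g))
    complementarity = abs(lam * (float(x.sum()) - 1.0))
    return stationarity, complementarity

x_exact = np.array([0.5, 0.5])
s0, c0 = kkt_residuals(x_exact, 1.0)       # exact point: residuals vanish
x_approx = x_exact + 1e-3                  # a delta-optimal point
s1, c1 = kkt_residuals(x_approx, 1.0)
print(round(s0, 6), round(s1, 6), round(c1, 6))
```

In the paper's setting the multipliers paired with the approximate solution are themselves produced by an approximating quadratic program rather than assumed known, but the acceptance test has this residual form.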
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nojiri, S.; Odintsov, S.D.; Oikonomou, V.K., E-mail: nojiri@gravity.phys.nagoya-u.ac.jp, E-mail: odintsov@ieec.uab.es, E-mail: v.k.oikonomou1979@gmail.com
2016-05-01
We extend the formalism of Einstein-Hilbert unimodular gravity to the context of modified F(R) gravity. After appropriately modifying the Friedmann-Robertson-Walker metric so that it becomes compatible with the unimodular condition of a constant metric determinant, we derive the equations of motion of unimodular F(R) gravity by using the metric formalism of modified gravity with a Lagrange multiplier constraint. The resulting equations are studied in the framework of the reconstruction method, which enables us to realize various cosmological scenarios that were impossible to realize in standard Einstein-Hilbert unimodular gravity. Several unimodular F(R) inflationary scenarios are presented, and in some cases, concordance with Planck and BICEP2 observational data can be achieved.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Dodin, I. Y.; Zhmoginov, A. I.; Ruiz, D. E.
2017-02-24
Applications of variational methods are typically restricted to conservative systems. Some extensions to dissipative systems have been reported too but require ad hoc techniques such as the artificial doubling of the dynamical variables. We propose a different approach. Here, we show that for a broad class of dissipative systems of practical interest, variational principles can be formulated using constant Lagrange multipliers and Lagrangians nonlocal in time, which allow treating reversible and irreversible dynamics on the same footing. A general variational theory of linear dispersion is formulated as an example. In particular, we present a variational formulation for linear geometrical optics in a general dissipative medium, which is allowed to be nonstationary, inhomogeneous, anisotropic, and to exhibit both temporal and spatial dispersion simultaneously.
Maximizing and minimizing investment concentration with constraints of budget and investment risk
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-01-01
In this paper, as a first step in examining the properties of a feasible portfolio subset that is characterized by budget and risk constraints, we assess the maximum and minimum of the investment concentration using replica analysis. To do this, we apply an analytical approach of statistical mechanics. We note that the optimization problem considered in this paper is the dual problem of the portfolio optimization problem discussed in the literature, and we verify that these optimal solutions are also dual. We also present numerical experiments, in which we use the method of steepest descent that is based on Lagrange's method of undetermined multipliers, and we compare the numerical results to those obtained by replica analysis in order to assess the effectiveness of our proposed approach.
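The numerical procedure mentioned above, steepest descent built on Lagrange's method of undetermined multipliers, can be sketched on a toy constrained problem. This is an illustrative example under assumed dynamics (plain gradient descent in the primal variables, gradient ascent in the multiplier), not the replica-analysis setting of the paper.

```python
import numpy as np

# Minimal sketch: minimize f(w) = w.w subject to the "budget" constraint
# sum(w) = 1, by steepest descent on the Lagrangian
#   L(w, lam) = w.w + lam * (sum(w) - 1),
# descending in w and ascending in the undetermined multiplier lam.

def constrained_steepest_descent(n=4, eta=0.05, iters=4000):
    w = np.zeros(n)
    lam = 0.0                          # undetermined multiplier
    for _ in range(iters):
        grad_w = 2.0 * w + lam         # dL/dw
        w -= eta * grad_w              # descend in the primal variables
        lam += eta * (w.sum() - 1.0)   # ascend in the multiplier
    return w, lam

w, lam = constrained_steepest_descent()
print(w, lam)   # w -> 1/n in every entry, lam -> -2/n
```

At the fixed point the stationarity condition 2w + lam = 0 and the constraint sum(w) = 1 hold simultaneously, which is exactly the Lagrange condition the descent dynamics are designed to reach.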
A robust direct-integration method for rotorcraft maneuver and periodic response
NASA Technical Reports Server (NTRS)
Panda, Brahmananda
1992-01-01
The Newmark-Beta method and the Newton-Raphson iteration scheme are combined to develop a direct-integration method for evaluating the maneuver and periodic-response expressions for rotorcraft. The method requires the generation of Jacobians and includes higher derivatives in the formulation of the geometric stiffness matrix to enhance the convergence of the system. The method leads to effective convergence with nonlinear structural dynamics and aerodynamic terms. Singularities in the matrices can be addressed with the method as they arise from a Lagrange multiplier approach for coupling equations with nonlinear constraints. The method is also shown to be general enough to handle singularities from quasisteady control-system models. The method is shown to be more general and robust than the similar 2GCHAS method for analyzing rotorcraft dynamics.
NASA Astrophysics Data System (ADS)
Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.
2012-11-01
Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).
NASA Technical Reports Server (NTRS)
Darbro, W.
1978-01-01
In an experiment in space it was found that when a cubical frame was slowly withdrawn from a soap solution, the wire frame retained practically a full cube of liquid. Removed from the frame (by shaking), the faces of the cube became progressively more concave, until adjacent faces became tangential. In the present paper a mathematical model describing the shape a liquid takes due to its surface tension while suspended on a wire frame in zero-g is solved by use of Lagrange multipliers. It is shown how the configuration of soap films so bounded is dependent upon the volume of liquid trapped in the films. A special case of the solution is a soap film naturally formed on a cubical wire frame.
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to fix statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
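The classical-entropy special case of the ME principle discussed above can be made concrete: maximize the Shannon entropy subject to an expected-value constraint, with the Lagrange multiplier fixed numerically. This is a generic textbook sketch (a tilted die distribution), not the quantum-entropy machinery of the paper.

```python
import numpy as np

# Maximize -sum p_i log p_i subject to normalization and E[X] = mu.
# The solution has the Gibbs form p_i proportional to exp(-lam * x_i),
# with the Lagrange multiplier lam fixed by the constraint; we find it
# by bisection, since the mean is monotone decreasing in lam.

def maxent_with_mean(x, mu, lo=-50.0, hi=50.0):
    x = np.asarray(x, float)
    def mean(lam):
        w = np.exp(-lam * (x - x.mean()))   # shift for numerical stability
        return (w * x).sum() / w.sum()
    for _ in range(200):                    # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        if mean(mid) > mu:                  # mean too large -> increase lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * (x - x.mean()))
    return w / w.sum(), lam

p, lam = maxent_with_mean([1, 2, 3, 4, 5, 6], mu=4.5)
print(p, lam)   # tilted die distribution with mean 4.5; lam < 0 favours large faces
```

The same pattern, multipliers as the knobs that enforce expectation constraints, is what the paper reinterprets as hyperparameters in the Bayesian setting.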
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
Discrimination of coherent features in turbulent boundary layers by the entropy method
NASA Technical Reports Server (NTRS)
Corke, T. C.; Guezennec, Y. G.
1984-01-01
Entropy in information theory is defined as the expected value of the self-information contained in the ith point of a distribution series x_i, based on its probability of occurrence p(x_i). If p(x_i) is the probability of the ith state of the system in probability space, then the entropy, E(X) = -Σ_i p(x_i) log p(x_i), is a measure of the disorder in the system. Based on this concept, a method was devised which sought to minimize the entropy in a time series in order to construct the signature of the most coherent motions. The constrained minimization was performed using a Lagrange multiplier approach, which resulted in the solution of a simultaneous set of nonlinear coupled equations to obtain the coherent time series. The application of the method to space-time data taken by a rake of sensors in the near-wall region of a turbulent boundary layer is presented. The results yielded coherent velocity motions made up of locally decelerated or accelerated fluid having a streamwise scale of approximately 100 nu/u(tau), which is in qualitative agreement with the results from other, less objective discrimination methods.
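The entropy measure defined above is easy to compute directly. A minimal sketch (generic information-theoretic entropy, not the turbulence discrimination procedure itself):

```python
import math

# E(X) = -sum_i p(x_i) * log p(x_i): maximal for a uniform distribution
# (maximum disorder) and zero for a deterministic one (perfect order).

def entropy(p, base=2.0):
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    # terms with p_i = 0 contribute nothing (p log p -> 0)
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0.0)

print(entropy([0.25] * 4))        # uniform over 4 states -> 2 bits
print(entropy([1.0, 0.0, 0.0]))   # fully ordered state -> 0 bits
```

Minimizing this quantity over candidate signatures, as in the abstract, selects the most ordered (most coherent) time series consistent with the constraints.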
Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic
NASA Astrophysics Data System (ADS)
Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat
2017-03-01
The problem of investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns have a certain distribution, and the risk of the portfolio is measured using the Value-at-Risk (VaR). The optimization of the Mean-VaR portfolio model is carried out using a matrix algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and the risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. The analysis of the return data of the five stocks yields the weight-composition vector and the efficient-surface graph of the portfolio, which can be used as a guide for investors in making investment decisions.
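The matrix-algebra plus Lagrange-multiplier step described above can be sketched on the simplest related problem, the minimum-variance portfolio with a target return. This is a generic mean-variance example with assumed toy numbers, not the paper's Mean-VaR model: the two equality constraints and the stationarity condition form one linear KKT system in the weights and the two multipliers.

```python
import numpy as np

# min_w  w' C w   s.t.  w' 1 = 1 (budget),  w' mu = r (target return).
# KKT system:  [2C  1  mu] [w  ]   [0]
#              [1'  0  0 ] [lm1] = [1]
#              [mu' 0  0 ] [lm2]   [r]

def min_variance_weights(C, mu, r):
    n = len(mu)
    ones = np.ones(n)
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2.0 * C                     # stationarity block
    K[:n, n] = ones
    K[:n, n + 1] = mu
    K[n, :n] = ones                         # budget constraint row
    K[n + 1, :n] = mu                       # target-return constraint row
    rhs = np.concatenate([np.zeros(n), [1.0, r]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]                 # weights, Lagrange multipliers

C = np.array([[0.04, 0.01], [0.01, 0.09]])  # toy covariance matrix (assumed)
mu = np.array([0.08, 0.12])                 # toy expected returns (assumed)
w, lams = min_variance_weights(C, mu, r=0.10)
print(w)   # weights sum to 1 and hit the 10% target return
```

Sweeping the target return r and re-solving traces out the efficient surface mentioned in the abstract.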
NASA Astrophysics Data System (ADS)
Nagurney, Anna; Besik, Deniz; Yu, Min
2018-04-01
In this paper, we construct a competitive food supply chain network model in which the profit-maximizing producers decide not only on the volume of fresh produce produced and distributed using various supply chain network pathways, but also, with the associated costs, on the initial quality of the fresh produce. Consumers, in turn, respond to the various producers' product outputs through the prices that they are willing to pay, given also the average quality associated with each producer or brand at the retail outlets. The quality of the fresh produce is captured through explicit formulae that incorporate time, temperature, and other link characteristics, with links associated with processing, shipment, storage, etc. Capacities on links are incorporated, as well as upper bounds on the initial product quality of the firms at their production/harvesting sites. The governing concept of the competitive supply chain network model is that of Nash equilibrium, for which alternative variational inequality formulations are derived, along with existence results. An algorithmic procedure, which can be interpreted as a discrete-time tatonnement process, is then described and applied to compute the equilibrium produce flow patterns and accompanying link Lagrange multipliers in a realistic case study, focusing on peaches, which includes disruptions.
Lagrange multiplier and Wess-Zumino variable as extra dimensions in the torus universe
NASA Astrophysics Data System (ADS)
Nejad, Salman Abarghouei; Dehghani, Mehdi; Monemzadeh, Majid
2018-01-01
We study the effect of the simplest geometry imposed via the topology of the universe by gauging the non-relativistic particle model on the torus and 3-torus with the help of the symplectic formalism of constrained systems. We also obtain the generators of gauge transformations for the gauged models. Extracting the corresponding Poisson structure of the existing constraints, we show the effect of the shape of the universe on the canonical structure of the phase spaces of the models and suggest some phenomenology to probe the topology of the universe and a probable non-commutative structure of space. In addition, we show that the number of extra dimensions in the phase spaces of the gauged embedded models is exactly two. Moreover, in classical form, we discuss the modification of Newton's second law in order to study the origin of the terms appearing in the gauged theory.
A finite-temperature Hartree-Fock code for shell-model Hamiltonians
NASA Astrophysics Data System (ADS)
Bertsch, G. F.; Mehlhaff, J. M.
2016-10-01
The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
De Sitter and scaling solutions in a higher-order modified teleparallel theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paliathanasis, Andronikos, E-mail: anpaliat@phys.uoa.gr
The existence and stability conditions for some exact relativistic solutions of special interest are studied in a higher-order modified teleparallel gravitational theory. With the use of a Lagrange multiplier, the theory is equivalent to General Relativity with a minimally coupled noncanonical field. The conditions for the existence of de Sitter solutions and ideal gas solutions in the vacuum case are studied, as well as the stability criteria. Furthermore, in the presence of matter, the behaviour of scaling solutions is given. Finally, we discuss the degrees of freedom of the field equations and reduce the field equations to an algebraic equation; to demonstrate our result, we show how this noncanonical scalar field can reproduce the Hubble function of Λ-cosmology.
Dynamics of a New 5D Hyperchaotic System of Lorenz Type
NASA Astrophysics Data System (ADS)
Zhang, Fuchen; Chen, Rui; Wang, Xingyuan; Chen, Xiusu; Mu, Chunlai; Liao, Xiaofeng
Ultimate boundedness of chaotic dynamical systems is one of the fundamental concepts in dynamical systems, playing an important role in investigating the stability of the equilibrium, estimating the Lyapunov dimension and Hausdorff dimension of attractors, and studying the existence of periodic solutions, chaos control, and chaos synchronization. However, it is often difficult to obtain the bounds of hyperchaotic systems due to their complex algebraic structure. This paper investigates the boundedness of solutions of a nonlinear hyperchaotic system. We obtain the global exponential attractive set and the ultimate bound set for this system. To obtain the ellipsoidal ultimate bound, the ultimate bound of the proposed system is theoretically estimated using the Lagrange multiplier method, Lyapunov stability theory and optimization theory. To show the ultimate bound region, numerical simulations are provided.
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
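The low-rank dictionary recovery step mentioned above is commonly solved by an inexact augmented Lagrange multiplier (IALM) scheme for robust PCA. The following is a generic sketch with standard default parameters, not the paper's implementation: it splits a data matrix into a low-rank part and a sparse-error part by alternating proximal steps and a multiplier update.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: proximal step for the elementwise l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, iters=200, tol=1e-7):
    """Split D ~ L + S (low rank + sparse) by inexact augmented Lagrange
    multipliers; lam and mu follow common robust-PCA defaults."""
    lam = 1.0 / np.sqrt(max(D.shape))
    mu = 1.25 / np.linalg.norm(D, 2)
    rho, mu_max = 1.5, mu * 1e7
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                    # the Lagrange multiplier matrix
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        R = D - L - S                       # residual of the constraint D=L+S
        Y += mu * R                         # multiplier (dual) update
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))   # rank 2
S0 = np.zeros((40, 40))
S0[rng.random((40, 40)) < 0.05] = 5.0                              # sparse spikes
L, S = rpca_ialm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))   # small recovery error
```

In the classification pipeline of the abstract, D would be the stacked feature dictionary; the recovered low-rank part L serves as the cleaned dictionary for the sparse-representation stage.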
Phase-field modeling of isothermal quasi-incompressible multicomponent liquids
NASA Astrophysics Data System (ADS)
Tóth, Gyula I.
2016-09-01
In this paper general dynamic equations describing the time evolution of isothermal quasi-incompressible multicomponent liquids are derived in the framework of the classical Ginzburg-Landau theory of first order phase transformations. Based on the fundamental equations of continuum mechanics, a general convection-diffusion dynamics is set up first for compressible liquids. The constitutive relations for the diffusion fluxes and the capillary stress are determined in the framework of gradient theories. Next the general definition of incompressibility is given, which is taken into account in the derivation by using the Lagrange multiplier method. To validate the theory, the dynamic equations are solved numerically for the quaternary quasi-incompressible Cahn-Hilliard system. It is demonstrated that variable density (i) has no effect on equilibrium (in case of a suitably constructed free energy functional) and (ii) can influence nonequilibrium pattern formation significantly.
Hamiltonian dynamics of extended objects
NASA Astrophysics Data System (ADS)
Capovilla, R.; Guven, J.; Rojas, E.
2004-12-01
We consider relativistic extended objects described by a reparametrization-invariant local action that depends on the extrinsic curvature of the worldvolume swept out by the object as it evolves. We provide a Hamiltonian formulation of the dynamics of such higher derivative models which is motivated by the ADM formulation of general relativity. The canonical momenta are identified by looking at boundary behaviour under small deformations of the action; the relationship between the momentum conjugate to the embedding functions and the conserved momentum density is established. The canonical Hamiltonian is constructed explicitly; the constraints on the phase space, both primary and secondary, are identified and the role they play in the theory is described. The multipliers implementing the primary constraints are identified in terms of the ADM lapse and shift variables and Hamilton's equations are shown to be consistent with the Euler Lagrange equations.
New Variational Formulations of Hybrid Stress Elements
NASA Technical Reports Server (NTRS)
Pian, T. H. H.; Sumihara, K.; Kang, D.
1984-01-01
In the variational formulations of finite elements by the Hu-Washizu and Hellinger-Reissner principles, the stress equilibrium condition is maintained by the inclusion of internal displacements which function as the Lagrange multipliers for the constraints. These versions permit the use of natural coordinates and the relaxation of the equilibrium conditions, and render considerable improvements in the assumed stress hybrid elements. These include the derivation of invariant hybrid elements which possess ideal qualities such as minimum sensitivity to geometric distortions, a minimum number of independent stress parameters, rank sufficiency, and the ability to represent constant strain states and bending moments. Another application is the formulation of semi-Loof thin shell elements which can yield excellent results for many severe test cases because the rigid body modes, the momentless membrane strains, and the inextensional bending modes are all represented.
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
NASA Astrophysics Data System (ADS)
Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel
2017-07-01
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
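The Newton iteration in the Lagrange multipliers of the dual optimization problem, mentioned above for the 35-moment system, can be sketched in one dimension. This is a generic toy (three monomial moments, a simple quadrature grid, damped Newton with backtracking), not the paper's solver: we recover the multipliers of the maximum-entropy density whose moments match those of a standard Gaussian.

```python
import numpy as np

v = np.linspace(-10.0, 10.0, 4001)           # quadrature grid (assumed)
dv = v[1] - v[0]
P = np.vstack([np.ones_like(v), v, v ** 2])  # basis p(v) = (1, v, v^2)

def dual_obj(lam, m):
    # dual objective: integral of exp(lam.p) minus lam.m
    with np.errstate(over="ignore"):
        return np.exp(lam @ P).sum() * dv - lam @ m

def dual_newton(m, iters=50):
    lam = np.array([-1.0, 0.0, -1.0])        # start inside the realizable cone
    for _ in range(iters):
        f = np.exp(lam @ P)                  # current maximum-entropy ansatz
        grad = P @ f * dv - m                # moment mismatch = dual gradient
        H = (P * f) @ P.T * dv               # dual Hessian <p_i p_j>_f
        step = np.linalg.solve(H, grad)      # Newton direction in multipliers
        t = 1.0                              # backtracking keeps lam realizable
        while dual_obj(lam - t * step, m) > dual_obj(lam, m) - 1e-4 * t * (grad @ step):
            t *= 0.5
            if t < 1e-8:
                break
        lam = lam - t * step
    return lam

m = np.array([1.0, 0.0, 1.0])                # moments of the standard Gaussian
lam = dual_newton(m)
print(lam)   # close to (-0.5*ln(2*pi), 0, -0.5), i.e. the Gaussian itself
```

The expensive part per Newton step is the moment quadrature for the gradient and Hessian, which is exactly the fine-grained work the abstract offloads to GPUs.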
Mean-Field Description of Ionic Size Effects with Non-Uniform Ionic Sizes: A Numerical Approach
Zhou, Shenggao; Wang, Zhongming; Li, Bo
2013-01-01
Ionic size effects are significant in many biological systems. Mean-field descriptions of such effects can be efficient but also challenging. When ionic sizes are different, explicit formulas in such descriptions are not available for the dependence of the ionic concentrations on the electrostatic potential, i.e., there is no explicit, Boltzmann type distributions. This work begins with a variational formulation of the continuum electrostatics of an ionic solution with such non-uniform ionic sizes as well as multiple ionic valences. An augmented Lagrange multiplier method is then developed and implemented to numerically solve the underlying constrained optimization problem. The method is shown to be accurate and efficient, and is applied to ionic systems with non-uniform ionic sizes such as the sodium chloride solution. Extensive numerical tests demonstrate that the mean-field model and numerical method capture qualitatively some significant ionic size effects, particularly those for multivalent ionic solutions, such as the stratification of multivalent counterions near a charged surface. The ionic valence-to-volume ratio is found to be the key physical parameter in the stratification of concentrations. All these are not well described by the classical Poisson–Boltzmann theory, or the generalized Poisson–Boltzmann theory that treats uniform ionic sizes. Finally, various issues such as the close packing, limitation of the continuum model, and generalization of this work to molecular solvation are discussed. PMID:21929014
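The augmented Lagrange multiplier method used above for the constrained optimization problem can be sketched on a small toy. This is a generic example (a quadratic objective with one linear equality constraint, exact inner minimization), not the ionic-solution solver: each outer iteration minimizes the augmented Lagrangian and then updates the multiplier by the constraint violation.

```python
import numpy as np

# Toy problem: minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2
# subject to c(x) = x1 + x2 - 2 = 0.  Augmented Lagrangian:
#   L_mu(x, lam) = f(x) + lam * c(x) + (mu / 2) * c(x)^2.

def augmented_lagrangian(mu=10.0, outer=30):
    lam = 0.0
    x = np.zeros(2)
    for _ in range(outer):
        # stationarity of L_mu in x: a 2x2 linear solve for this quadratic toy
        A = np.array([[2.0 + mu, mu], [mu, 2.0 + mu]])
        b = np.array([4.0 - lam + 2.0 * mu, 2.0 - lam + 2.0 * mu])
        x = np.linalg.solve(A, b)
        lam += mu * (x.sum() - 2.0)     # multiplier update on the violation
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)   # converges to the projection (1.5, 0.5) with multiplier 1
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied exactly in the limit without driving mu to infinity, which is what makes the approach attractive for the stiff constrained problems of the abstract.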
An entropic framework for modeling economies
NASA Astrophysics Data System (ADS)
Caticha, Ariel; Golan, Amos
2014-08-01
We develop an information-theoretic framework for economic modeling. This framework is based on principles of entropic inference that are designed for reasoning on the basis of incomplete information. We take the point of view of an external observer who has access to limited information about broad macroscopic economic features. We view this framework as complementary to more traditional methods. The economy is modeled as a collection of agents about whom we make no assumptions of rationality (in the sense of maximizing utility or profit). States of statistical equilibrium are introduced as those macrostates that maximize entropy subject to the relevant information codified into constraints. The basic assumption is that this information refers to supply and demand and is expressed in the form of the expected values of certain quantities (such as inputs, resources, goods, production functions, utility functions and budgets). The notion of economic entropy is introduced. It provides a measure of the uniformity of the distribution of goods and resources. It captures both the welfare state of the economy as well as the characteristics of the market (say, monopolistic, concentrated or competitive). Prices, which turn out to be the Lagrange multipliers, are endogenously generated by the economy. Further studies include the equilibrium between two economies and the conditions for stability. As an example, the case of the nonlinear economy that arises from linear production and utility functions is treated in some detail.
Numerical and experimental approaches to simulate soil clogging in porous media
NASA Astrophysics Data System (ADS)
Kanarska, Yuliya; LLNL Team
2012-11-01
Failure of a dam by erosion ranks among the most serious accidents in civil engineering. The best way to prevent internal erosion is using adequate granular filters in the transition areas where important hydraulic gradients can appear. In case of cracking and erosion, if the filter is capable of retaining the eroded particles, the crack will seal and the dam safety will be ensured. A finite element numerical solution of the Navier-Stokes equations for fluid flow, together with a Lagrange multiplier technique for solid particles, was applied to the simulation of soil filtration. The numerical approach was validated through comparison of numerical simulations with the experimental results of base soil particle clogging in the filter layers performed at ERDC. The numerical simulation correctly predicted flow and pressure decay due to particle clogging. The base soil particle distribution was almost identical to that measured in the laboratory experiment. To get a more precise understanding of soil transport in granular filters, we investigated the sensitivity of particle clogging mechanisms to various aspects such as the particle size ratio, the amplitude of the hydraulic gradient, particle concentration and contact properties. By averaging the results derived from the grain-scale simulations, we investigated how those factors affect the semi-empirical multiphase model parameters in the large-scale simulation tool. The Department of Homeland Security Science and Technology Directorate provided funding for this research.
Information asymmetry and deception
Clots-Figueras, Irma; Hernán-González, Roberto; Kujal, Praveen
2015-01-01
Situations such as an entrepreneur overstating a project's value, or a superior choosing to under- or overstate the gains from a project to a subordinate, are common and may result in acts of deception. In this paper we modify the standard investment game in the economics literature to study the nature of deception. In this game a trustor (investor) can send a given amount of money to a trustee (or investee). The amount received is multiplied by a certain amount, k, and the investee then decides on how to divide the total amount received. In our modified game the information on the multiplier, k, is known only to the investee, and she can send a non-binding message to the investor regarding its value. We find that 66% of the investees send false messages, with both under- and over-statement being observed. Investors are naive and almost half of them believe the message received. We find greater lying when the distribution of the multiplier is unknown by the investors than when they know the distribution. Further, messages make beliefs about the multiplier more pessimistic when the investors know the distribution of the multiplier, while the opposite is true when they do not know the distribution. PMID:26257615
Fiber-reinforced materials: finite elements for the treatment of the inextensibility constraint
NASA Astrophysics Data System (ADS)
Auricchio, Ferdinando; Scalet, Giulia; Wriggers, Peter
2017-12-01
The present paper proposes a numerical framework for the analysis of problems involving fiber-reinforced anisotropic materials. Specifically, isotropic linear elastic solids, reinforced by a single family of inextensible fibers, are considered. The kinematic constraint equation of inextensibility in the fiber direction leads to the presence of an undetermined fiber stress in the constitutive equations. To avoid locking phenomena in the numerical solution due to the presence of the constraint, mixed finite elements based on the Lagrange multiplier, perturbed Lagrangian, and penalty methods are proposed. Several boundary-value problems under plane strain conditions are solved and the numerical results are compared to analytical solutions, whenever the derivation is possible. The performed simulations allow us to assess the performance of the proposed finite elements and to discuss several features of the developed formulations concerning the effective approximation of the displacement and fiber stress fields, mesh convergence, and sensitivity to penalty parameters.
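The contrast above between Lagrange multiplier and penalty enforcement of a kinematic constraint can be illustrated on a hypothetical two-degree-of-freedom spring toy (not the paper's fiber-reinforced elements): the multiplier enforces B u = 0 exactly via a saddle-point system, while the penalty leaves an O(1/p) violation.

```python
import numpy as np

K = np.array([[2.0, -1.0], [-1.0, 1.0]])   # stiffness of two unit springs
F = np.array([0.0, 1.0])                   # load on the second node
B = np.array([[-1.0, 1.0]])                # constraint: u2 - u1 = 0

# Lagrange multiplier: solve the saddle system [[K, B'], [B, 0]] [u; lam] = [F; 0].
Z = np.block([[K, B.T], [B, np.zeros((1, 1))]])
sol = np.linalg.solve(Z, np.concatenate([F, [0.0]]))
u_lm, lam = sol[:2], sol[2]

# Penalty: (K + p B'B) u = F.  Simpler and positive definite, but the
# constraint is violated by O(1/p) and conditioning degrades as p grows.
p = 1e6
u_pen = np.linalg.solve(K + p * B.T @ B, F)

print(u_lm, lam)   # exact: u = (1, 1); the multiplier carries the link force
print(B @ u_pen)   # small but nonzero constraint violation, about 1/p
```

The perturbed Lagrangian formulation mentioned in the abstract sits between these two: it regularizes the zero block of the saddle system while keeping a multiplier unknown.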
Dynamic mortar finite element method for modeling of shear rupture on frictional rough surfaces
NASA Astrophysics Data System (ADS)
Tal, Yuval; Hager, Bradford H.
2017-09-01
This paper presents a mortar-based finite element formulation for modeling the dynamics of shear rupture on rough interfaces governed by slip-weakening and rate and state (RS) friction laws, focusing on the dynamics of earthquakes. The method utilizes the dual Lagrange multipliers and the primal-dual active set strategy concepts, together with a consistent discretization and linearization of the contact forces and constraints, and the friction laws to obtain a semi-smooth Newton method. The discretization of the RS friction law involves a procedure to condense out the state variables, thus eliminating the addition of another set of unknowns into the system. Several numerical examples of shear rupture on frictional rough interfaces demonstrate the efficiency of the method and examine the effects of the different time discretization schemes on the convergence, energy conservation, and the time evolution of shear traction and slip rate.
Worst case estimation of homology design by convex analysis
NASA Technical Reports Server (NTRS)
Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.
1998-01-01
The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of the loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to the discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Development and application of a unified balancing approach with multiple constraints
NASA Technical Reports Server (NTRS)
Zorzi, E. S.; Lee, C. C.; Giordano, J. C.
1985-01-01
The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and the application of modal trial weight ratios is also described.
NASA Astrophysics Data System (ADS)
Schilder, J.; Ellenbroek, M.; de Boer, A.
2017-12-01
In this work, the floating frame of reference formulation is used to create a flexible multibody model of slender offshore structures such as pipelines and risers. It is shown that, due to the chain-like topology of the considered structures, the equation of motion can be expressed in terms of absolute interface coordinates. In the presented form, kinematic constraint equations are satisfied explicitly and the Lagrange multipliers are eliminated from the equations. Hence, the structures can be conveniently coupled to finite element or multibody models of, for example, the seabed and vessel. The chain-like topology enables the efficient use of recursive solution procedures for both transient dynamic analysis and equilibrium analysis. For this, the transfer matrix method is used. In order to improve the convergence of the equilibrium analysis, the analytical solution of an ideal catenary is used as the initial configuration, reducing the number of required iterations.
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and model parameter sets, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver, avoiding calculation of the inverse matrix when solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than separate inversion, but also yields results nearly equal to those of the model-space method while consuming much less memory.
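The point in the abstract above about determining the Lagrange multiplier with an iterative solver rather than an explicit matrix inverse can be illustrated with a bare-bones conjugate gradient routine (a generic CG sketch on a made-up 2x2 system, not the authors' implementation):

```python
# Minimal conjugate-gradient solver for a symmetric positive-definite
# system A x = b, illustrating how a Lagrange multiplier (or any linear
# sub-problem) can be obtained iteratively without ever forming A^{-1}.
# Generic sketch, not the authors' code; the 2x2 system is made up.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    x = [0.0] * len(b)
    r = [bi - avi for bi, avi in zip(b, matvec(A, x))]  # residual b - A x
    p = r[:]
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # close to [1/11, 7/11]
```

Only matrix-vector products with A are needed, which is what makes the approach attractive for the large systems that arise in joint inversion.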
Evolutionary branching under multi-dimensional evolutionary constraints.
Ito, Hiroshi; Sasaki, Akira
2016-10-21
The fitness of an existing phenotype and of a potential mutant should generally depend on the frequencies of other existing phenotypes. Adaptive evolution driven by such frequency-dependent fitness functions can be analyzed effectively using adaptive dynamics theory, assuming rare mutation and asexual reproduction. When possible mutations are restricted to certain directions due to developmental, physiological, or physical constraints, the resulting adaptive evolution may be restricted to subspaces (constraint surfaces) of lower dimensionality than the original trait spaces. To analyze such dynamics along constraint surfaces efficiently, we develop a Lagrange multiplier method in the framework of adaptive dynamics theory. On constraint surfaces of arbitrary dimensionality described with equality constraints, our method efficiently finds local evolutionarily stable strategies, convergence stable points, and evolutionary branching points. We also derive the conditions for the existence of evolutionary branching points on constraint surfaces when the shapes of the surfaces can be chosen freely. Copyright © 2016 Elsevier Ltd. All rights reserved.
The tightly bound nuclei in the liquid drop model
NASA Astrophysics Data System (ADS)
Sree Harsha, N. R.
2018-05-01
In this paper, we shall maximise the binding energy per nucleon function in the semi-empirical mass formula of the liquid drop model of the atomic nucleus to prove analytically that the mean binding energy per nucleon curve has local extrema at A ≈ 58.6960, Z ≈ 26.3908 and at A ≈ 62.0178, Z ≈ 27.7506. The Lagrange method of multipliers is used to arrive at these results, with A and Z allowed to take continuous fractional values. The shell model, which shows why 62Ni is the most tightly bound nucleus, is outlined. A brief account of stellar nucleosynthesis is presented to show why 56Fe is more abundant than 62Ni and 58Fe. We believe that the analytical proof presented in this paper can be a useful tool for instructors to introduce the nucleus with the highest mean binding energy per nucleon.
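The constrained maximization described in the abstract above is easy to reproduce numerically. The sketch below uses one standard textbook set of semi-empirical mass formula coefficients (an assumption; the paper's fitted values may differ) and, instead of carrying the Lagrange multiplier explicitly, inserts the stationarity condition dB/dZ = 0 analytically before a one-dimensional search over continuous A:

```python
# Sketch of locating the maximum mean binding energy per nucleon from the
# liquid-drop (semi-empirical) mass formula, with A and Z treated as
# continuous variables. Coefficients (MeV) are common textbook values, an
# assumption; the paper's values may differ. Pairing is omitted to keep
# B(A, Z) smooth.
aV, aS, aC, aA = 15.75, 17.8, 0.711, 23.7

def binding_energy(A, Z):
    return (aV * A - aS * A ** (2 / 3)
            - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (A - 2 * Z) ** 2 / A)

def z_star(A):
    """Z solving dB/dZ = 0 for fixed A (the stationarity condition in Z)."""
    t = A ** (-1 / 3)
    return (4 * aA + aC * t) / (2 * aC * t + 8 * aA / A)

# One-dimensional grid search over continuous A in [20, 100] for max B/A.
best = max((binding_energy(A, z_star(A)) / A, A, z_star(A))
           for A in (20 + 0.01 * k for k in range(8001)))
bpa, A_best, Z_best = best
print(round(A_best, 2), round(Z_best, 2), round(bpa, 3))
```

With these coefficients the maximum of B/A falls in the iron-nickel region (A in the high fifties to low sixties), consistent with the extrema quoted above; the precise A and Z depend on the coefficient set used.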
NASA Astrophysics Data System (ADS)
Capecelatro, Jesse
2018-03-01
It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smoothed particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data
NASA Astrophysics Data System (ADS)
Tian, Yu; Ke, Xiaoping; Wang, Yong
2018-04-01
This paper presents a three-dimensional density inversion software package called DenInv3D that operates on gravity and gravity gradient data. The software performs inversion modelling, kernel function calculation, and inversion calculations using an improved preconditioned conjugate gradient (PCG) algorithm. In the PCG algorithm, because of the uncertainty of empirical parameters such as the Lagrange multiplier, we use the inflection point of the L-curve as the regularisation parameter. The software can construct unequally spaced grids and perform inversions on them, which enables changing the resolution of the inversion results at different depths. Through inversion of airborne gradiometry data from the Australian Kauring test site, we discovered that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. The DenInv3D software can be downloaded from http://159.226.162.30.
A Variational Method in Out-of-Equilibrium Physical Systems
Pinheiro, Mario J.
2013-01-01
We propose a new variational principle for out-of-equilibrium dynamic systems that is fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics in terms of differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics, and thermodynamics. From this approach, a topological torsion current emerges of the form , where Aj and ωk denote the components of the vector potential (gravitational and/or electromagnetic) and ω denotes the angular velocity of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices. PMID:24316718
Solution procedure of dynamical contact problems with friction
NASA Astrophysics Data System (ADS)
Abdelhakim, Lotfi
2017-07-01
Dynamic contact is a common research topic because of its wide applications in the engineering field. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding, and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2,3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared with, and verified against, results from an independent numerical method.
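Uzawa's algorithm, mentioned in the abstract above, is easy to demonstrate on a toy equality-constrained quadratic problem (illustrative numbers, not the paper's contact model): a primal minimization for fixed multiplier alternates with a dual ascent step on the constraint residual.

```python
# Uzawa-style iteration on a toy problem (not the paper's contact model):
#   minimize  x1^2 + x2^2 - 2*x1   subject to  x1 + x2 = 2.
# The multiplier lam plays the role the contact pressure plays in the
# paper (enforcing a constraint); rho is the dual ascent step size.
rho, lam = 0.5, 0.0
for _ in range(200):
    # primal step: minimize the Lagrangian in x for fixed lam (closed form)
    x1 = (2.0 - lam) / 2.0
    x2 = -lam / 2.0
    # dual step: ascend on the constraint residual
    lam += rho * (x1 + x2 - 2.0)
print(x1, x2, lam)  # converges to x = (1.5, 0.5), lam = -1
```

In the dynamic contact setting the primal step is the elastodynamic solve and the dual variables are the contact and friction tractions; the loop structure is the same.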
Sun, WaiChing; Cai, Zhijun; Choo, Jinhyun
2016-11-18
An Arlequin poromechanics model is introduced to simulate the hydro-mechanical coupling effects of fluid-infiltrated porous media across different spatial scales within a concurrent computational framework. A two-field poromechanics problem is first recast as the twofold saddle point of an incremental energy functional. We then introduce Lagrange multipliers and compatibility energy functionals to enforce the weak compatibility of hydro-mechanical responses in the overlapped domain. Here, to examine the numerical stability of this hydro-mechanical Arlequin model, we derive a necessary condition for stability, the twofold inf-sup condition for multi-field problems, and establish a modified inf-sup test formulated in the product space of the solution field. We verify the implementation of the Arlequin poromechanics model through benchmark problems covering the entire range of drainage conditions. Finally, through these numerical examples, we demonstrate the performance, robustness, and numerical stability of the Arlequin poromechanics model.
Modeling and Analysis of Power Processing Systems (MAPPS), initial phase 2
NASA Technical Reports Server (NTRS)
Yu, Y.; Lee, F. C.; Wangenheim, H.; Warren, D.
1977-01-01
The overall objective of the program is to provide the engineering tools to reduce the analysis, design, and development effort, and thus the cost, in achieving the required performance for switching regulators and dc-dc converter systems. The program was both tutorial and application oriented. Various analytical methods were described in detail and supplemented with examples, and those with standardization appeal were reduced to computer-based subprograms. Major program efforts addressed small- and large-signal control-dependent performance analysis and simulation, control circuit design, power circuit design and optimization, system configuration studies, and system performance simulation. Techniques including discrete time domain, conventional frequency domain, Lagrange multiplier, nonlinear programming, and control design synthesis were employed in these efforts. To enhance interaction between the user and the modeling and analysis subprograms, a working prototype of the Data Management Program was also developed, designed to facilitate expansion as future subprogram capabilities increase.
Optimization of end-pumped, actively Q-switched quasi-III-level lasers.
Jabczynski, Jan K; Gorajek, Lukasz; Kwiatkowski, Jacek; Kaskow, Mateusz; Zendzian, Waldemar
2011-08-15
A new model of end-pumped quasi-III-level lasers accounting for transient pumping processes, ground-state depletion, and up-conversion effects was developed. The model consists of two parts, a pumping stage and a Q-switched part, which can be separated in the case of the active Q-switching regime. For the pumping stage a semi-analytical model was developed, enabling calculation of the final occupation of the upper laser level for a given pump power and duration, spatial profile of the pump beam, and length and dopant level of the gain medium. For quasi-stationary inversion, an optimization procedure for the Q-switching regime based on the Lagrange multiplier technique was developed. A new approach to optimization of the CW regime of quasi-three-level lasers was developed to optimize Q-switched lasers operating at high repetition rates. Both optimization methods enable calculation of the optimal absorbance of the gain medium and output losses for a given pump rate. © 2011 Optical Society of America
Two neural network algorithms for designing optimal terminal controllers with open final time
NASA Technical Reports Server (NTRS)
Plumer, Edward S.
1992-01-01
Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.
Aerodynamic design and optimization in one shot
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.
1992-01-01
This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.
Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.
Yang, Shuyuan; Zhang, Kai; Wang, Min
2017-08-25
Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from a new perspective of offset learning. Two offsets are defined to represent the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images, respectively. In order to reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and cast on the offsets, to develop a spatially and spectrally constrained stable low-rank decomposition algorithm via the augmented Lagrange multiplier method. By fine modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, our method can efficiently deal with noise and outliers in the source images by exploiting the low-rank and sparse characteristics of the data. Extensive experiments are conducted on several image data sets, and the results demonstrate the efficiency of the proposed LRP.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
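The Augmented Lagrange Multiplier strategy named in the abstract above can be sketched on a toy equality-constrained problem (the objective and numbers are made up; this is not ADS code): each outer iteration minimizes the augmented Lagrangian f(x) + lam*h(x) + (mu/2)*h(x)^2, then updates lam by mu*h(x).

```python
# Augmented Lagrangian sketch on a toy problem (not the ADS program itself):
#   minimize  x^2 + 2*y^2   subject to  h(x, y) = x + y - 1 = 0.
# Inner minimizations use plain gradient descent; the multiplier update
# lam += mu * h drives the constraint violation to zero.
mu, lam = 10.0, 0.0
x, y = 0.0, 0.0
for _ in range(30):                      # outer multiplier updates
    for _ in range(2000):                # inner unconstrained minimization
        h = x + y - 1.0
        gx = 2.0 * x + lam + mu * h      # d/dx of the augmented Lagrangian
        gy = 4.0 * y + lam + mu * h      # d/dy of the augmented Lagrangian
        x -= 0.02 * gx
        y -= 0.02 * gy
    lam += mu * (x + y - 1.0)
print(x, y, lam)  # near x = 2/3, y = 1/3, lam = -4/3
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied to high accuracy without driving mu to infinity, which is why ALM is offered as a strategy alongside sequential unconstrained minimization.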
Solar Corona Simulation Model With Positivity-preserving Property
NASA Astrophysics Data System (ADS)
Feng, X. S.
2015-12-01
Positivity-preserving is one of crucial problems in solar corona simulation. In such numerical simulation of low plasma β region, keeping density and pressure is a first of all matter to obtain physical sound solution. In the present paper, we utilize the maximum-principle-preserving flux limiting technique to develop a class of second order positivity-preserving Godunov finite volume HLL methods for the solar wind plasma MHD equations. Based on the underlying first order building block of positivity preserving Lax-Friedrichs, our schemes, under the constrained transport (CT) and generalized Lagrange multiplier (GLM) framework, can achieve high order accuracy, a discrete divergence-free condition and positivity of the numerical solution simultaneously without extra CFL constraints. Numerical results in four Carrington rotation during the declining, rising, minimum and maximum solar activity phases are provided to demonstrate the performance of modeling small plasma beta with positivity-preserving property of the proposed method.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method comprises two stages, dealing with quantization distortion and with transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in the different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, energy is allocated to the different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems; the first is solved with relaxation, while the second is solved completely. The effectiveness of the scheme is illustrated through simulation results, which show up to 10 dB improvement over the method without rate and power optimization in medium and low signal-to-noise-ratio cases.
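The first stage described in the abstract above, quantization-bit assignment by Lagrange multipliers with KKT conditions, has a classic closed-form analogue under the textbook high-rate model, where the distortion of sub-band i behaves like var[i] * 2**(-2*b[i]). The sketch below (made-up variances, not the paper's exact model) equalizes the Lagrangian distortion slopes across sub-bands:

```python
import math

# Sketch of Lagrangian bit allocation (textbook high-rate model, not the
# paper's exact formulation): minimize sum_i var[i] * 2**(-2*b[i]) subject
# to sum_i b[i] = B. Setting the derivative of the Lagrangian to zero gives
# equal distortion slopes, i.e. b[i] = B/n + 0.5*log2(var[i] / geo_mean).
var = [16.0, 4.0, 1.0]    # per-sub-band measurement variances (made up)
B = 9.0                   # total bit budget

n = len(var)
geo_mean = math.exp(sum(math.log(v) for v in var) / n)
bits = [B / n + 0.5 * math.log2(v / geo_mean) for v in var]
distortion_slopes = [v * 2.0 ** (-2.0 * b) for v, b in zip(var, bits)]
print(bits)               # [4.0, 3.0, 2.0] for this example
print(distortion_slopes)  # equal across sub-bands (the KKT condition)
```

When some b[i] comes out negative, the KKT multiplier on the constraint b[i] >= 0 activates and the allocation is recomputed over the remaining sub-bands (reverse water-filling); with these variances all allocations stay positive.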
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That approach does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for the relay nodes along a delivery path in a sensor network. The objective of optimizing the retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic-programming-based distributed algorithm for finding the optimal retransmission thresholds along a delivery path is proposed, with time complexity O(nΔ·max_{1≤i≤n}{u_i}), where u_i is the given upper bound of the retransmission threshold of sensor node i on the delivery path, n is the length of the delivery path, and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is super-polynomial, to reduce the time complexity, a linear-programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are big enough, a Lagrange-multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
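The flavor of the dynamic program above can be illustrated with a deliberately simplified model (my own toy formulation, not the paper's exact objective): each hop i succeeds per attempt with probability p[i] and gives up after u[i] attempts, every attempt costs one slot, and we compute the probability that the packet crosses all hops within a deadline of D slots; tiny instances can then be searched exhaustively over the thresholds.

```python
import itertools, functools

# Simplified sketch (toy model, not the paper's exact objective): probability
# that a packet crosses an n-hop path within D transmission slots when hop i
# succeeds per attempt with probability p[i] and gives up after u[i] attempts.
p = [0.9, 0.6, 0.8]   # per-attempt link success probabilities (made up)
D = 8                 # total deadline in slots

def delivery_prob(u):
    @functools.lru_cache(maxsize=None)
    def P(i, t):
        if i == len(p):
            return 1.0                     # reached the destination in time
        # succeed exactly at attempt k (k slots consumed), then continue
        return sum((1.0 - p[i]) ** (k - 1) * p[i] * P(i + 1, t - k)
                   for k in range(1, min(u[i], t) + 1))
    return P(0, D)

# Exhaustive search over small thresholds; the paper replaces this brute
# force with dynamic-programming and approximation schemes.
best_u = max(itertools.product(range(1, 5), repeat=len(p)), key=delivery_prob)
print(best_u, delivery_prob(best_u))
```

In this toy model attempts stop on success, so extra retries never hurt and the brute force picks the largest thresholds; the paper's setting, with per-node bounds and delay guarantees across nodes, is what makes the threshold trade-off nontrivial.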
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-01-01
In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if the network is able to harvest excessive energy from the environment, because of its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on the application scenario. Since a high, or even the maximum, data collection rate is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation to compute an upper data generation rate by maximizing it as an optimization problem for the network, formulated as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology control scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks. PMID:27483282
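The dual/subgradient machinery described in the abstract above can be sketched on a minimal network-utility problem (my own illustrative stand-in, not the paper's aggregation LP): maximize the sum of log-rates subject to a shared capacity. Each node computes its rate from the current multiplier (price) alone, which is what makes the subgradient update naturally distributed:

```python
# Distributed-flavor dual subgradient sketch (illustrative stand-in for the
# paper's LP): maximize sum_i log(r[i]) subject to sum_i r[i] <= C.
# For a given price lam, each node independently sets r_i = 1/lam
# (maximizing log r_i - lam * r_i); the price then follows the subgradient
# of the dual, namely the constraint violation sum_i r_i - C.
n, C = 4, 8.0
lam, step = 1.0, 0.01
for _ in range(5000):
    rates = [1.0 / lam for _ in range(n)]           # local primal updates
    lam = max(1e-6, lam + step * (sum(rates) - C))  # price (dual) update
print(rates, lam)  # rates approach C/n = 2.0 each, lam approaches n/C = 0.5
```

Here lam acts as a shadow price on the shared resource; in the paper the analogous role is played by the Lagrange multipliers of the data-collection linear program, updated by subgradient steps at the nodes.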
Efficient algorithms and implementations of entropy-based moment closures for rarefied gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel
We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, A.; Tzeferacos, P.; Zanni, C.
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover the outlier patterns from the low-rank data under some mild conditions. However, the ℓ1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly related, which can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbations. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly related background scenes. Instead of the ℓ1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the ℓ1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
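For context, the classical ℓ1-penalised RPCA baseline that this paper replaces with a TV penalty is itself commonly solved by the inexact ALM. The sketch below is that baseline (singular-value thresholding for the low-rank part, soft thresholding for the sparse part) on synthetic data; all sizes and parameter settings are illustrative, not the paper's.

```python
import numpy as np

def rpca_inexact_alm(D, lam=None, max_iter=500, tol=1e-7):
    """Split D into low-rank L and sparse S by the inexact ALM for RPCA."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual init
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.2
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # L-step: singular value thresholding
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - L - S
        Y = Y + mu * Z          # Lagrange multiplier update
        mu = rho * mu
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return L, S

# Synthetic check: rank-5 background plus 5% sparse outliers
rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
S0 = 10.0 * (rng.random((100, 100)) < 0.05) * np.sign(rng.standard_normal((100, 100)))
L, S = rpca_inexact_alm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small: background recovered
```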
Mechanical behavior of cells in microinjection: a minimum potential energy study.
Liu, Fei; Wu, Dan; Chen, Ken
2013-08-01
Microinjection is a widely used technique to deliver foreign materials into biological cells. We propose a mathematical model to study the mechanical behavior of a cell in microinjection. Firstly, a cell is modeled by a hyperelastic membrane and interior cytoplasm. Then, based on the fact that the equilibrium configuration of a cell minimizes the potential energy, the energy function during microinjection is analyzed. With the Lagrange multiplier and Rayleigh-Ritz techniques, we successfully minimize the potential energy and obtain the equilibrium configuration. Based on this model, the injection force, the injection distance, the radius of the microinjector and the membrane stress are studied. The analysis demonstrates that the microinjector radius has a significant influence on the cell's mechanical behavior: (1) a larger radius generates a larger injection force and larger interior pressure at the same injection distance; (2) the radius determines the place where the membrane is most likely to rupture by governing the membrane stress distribution. For a fine microinjector with a radius less than 20% of the cell radius, the most likely rupture point is located at the edge of the contact area between the microinjector and the membrane; however, it may move to the middle of the equilibrium configuration as the radius increases. To verify our model, experiments were conducted on zebrafish egg cells. The results show that the computational analysis agrees with the experimental data, which supports the findings from the theoretical model. Copyright © 2013 Elsevier Ltd. All rights reserved.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solutions of optimal control problems at the same cost of solving the corresponding analysis problems just a few times.
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.; Ochs, Harry T., III
1988-01-01
The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.
Stress Analysis of Composite Cylindrical Shells with an Elliptical Cutout
NASA Technical Reports Server (NTRS)
Oterkus, E.; Madenci, E.; Nemeth, M. P.
2007-01-01
A special-purpose, semi-analytical solution method for determining the stress and deformation fields in a thin laminated-composite cylindrical shell with an elliptical cutout is presented. The analysis includes the effects of cutout size, shape, and orientation; non-uniform wall thickness; oval-cross-section eccentricity; and loading conditions. The loading conditions include uniform tension, uniform torsion, and pure bending. The analysis approach is based on the principle of stationary potential energy and uses Lagrange multipliers to relax the kinematic admissibility requirements on the displacement representations through the use of idealized elastic edge restraints. Specifying appropriate stiffness values for the elastic extensional and rotational edge restraints (springs) allows the imposition of the kinematic boundary conditions in an indirect manner, which enables the use of a broader set of functions for representing the displacement fields. Selected results of parametric studies are presented for several geometric parameters that demonstrate that the analysis approach is a powerful means for developing design criteria for laminated-composite shells.
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumptions about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
NASA Astrophysics Data System (ADS)
Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.
2017-04-01
Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP is panel data, which consist of cross-section and time series data. Panel regression is a tool which can be utilised to analyse panel data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM). The model is chosen based on the results of the Chow Test, the Hausman Test and the Lagrange Multiplier Test. This research uses panel regression to analyse the influence of palm oil production, palm oil export, and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang. Based on the results of the analyses, it is concluded that REM, whose adjusted determination coefficient is 0.823, is the best model in this case. According to the results, only export and government consumption influence the GDP of the districts.
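One common form of the Lagrange Multiplier test used to choose REM over CEM is the Breusch-Pagan (1980) statistic for a balanced panel, computed from pooled OLS residuals. The sketch below implements that textbook formula on synthetic data; the panel dimensions and data-generating process are made up for illustration.

```python
import numpy as np

def lm_random_effects(y, X, N, T):
    """Breusch-Pagan (1980) LM statistic for random individual effects in a
    balanced panel. Rows i*T .. i*T+T-1 of y, X belong to individual i.
    Under H0 (no individual effects) LM is asymptotically chi-square(1)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # pooled OLS (CEM)
    e = y - X @ beta
    e_by_i = e.reshape(N, T)
    num = (e_by_i.sum(axis=1) ** 2).sum()
    return (N * T / (2.0 * (T - 1))) * (num / (e ** 2).sum() - 1.0) ** 2

# Hypothetical balanced panel with strong individual effects
rng = np.random.default_rng(1)
N, T = 50, 5
x = rng.standard_normal(N * T)
u = np.repeat(rng.normal(0.0, 2.0, N), T)          # individual effects
y = 1.0 + 0.5 * x + u + rng.standard_normal(N * T)
X = np.column_stack([np.ones(N * T), x])
lm = lm_random_effects(y, X, N, T)
print(round(lm, 1))  # large under strong individual effects
```

A statistic above the 5% chi-square(1) critical value 3.84 favours REM over pooled OLS.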
Stochastic Routing and Scheduling Policies for Energy Harvesting Communication Networks
NASA Astrophysics Data System (ADS)
Calvo-Fullana, Miguel; Anton-Haro, Carles; Matamoros, Javier; Ribeiro, Alejandro
2018-07-01
In this paper, we study the joint routing-scheduling problem in energy harvesting communication networks. Our policies, which are based on stochastic subgradient methods in the dual domain, act as an energy harvesting variant of the stochastic family of backpressure algorithms. Specifically, we propose two policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in which a node's routing-scheduling decisions are determined by the difference between the Lagrange multipliers associated with its queue stability constraints and its neighbors'; and (ii) the Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), an improved algorithm in which the routing-scheduling decision is of a probabilistic nature. For both policies, we show that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. Numerical results corroborate the stability guarantees and illustrate the minimal gap in performance that our policies offer with respect to classical ones which work with an unlimited energy supply.
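The backpressure principle behind these policies can be shown in a minimal single-commodity sketch: a link transmits only when its queue differential is positive. The three-node line network, arrival rate, and unit link capacities below are hypothetical; the energy-harvesting constraints and the dual stochastic-subgradient machinery of the paper are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
q = np.zeros(4)            # queues at nodes 0..2; index 3 is the sink
arr_rate, slots = 0.4, 2000
peak = 0.0
for _ in range(slots):
    q[0] += rng.random() < arr_rate           # exogenous Bernoulli arrival
    # backpressure: link (i, i+1) serves one packet iff the
    # queue differential q[i] - q[i+1] is positive
    for i in range(3):
        if q[i] - q[i + 1] > 0:
            q[i] -= 1
            q[i + 1] += 1
    q[3] = 0                                   # sink absorbs delivered packets
    peak = max(peak, q[:3].sum())
print(peak)  # stays bounded since the arrival rate is below link capacity
```

Queue differentials play the role of the Lagrange multipliers of the queue stability constraints, which is the correspondence the paper's SBP-EH policy builds on.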
Rate-independent dissipation in phase-field modelling of displacive transformations
NASA Astrophysics Data System (ADS)
Tůma, K.; Stupkiewicz, S.; Petryk, H.
2018-05-01
In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.
A model for wave propagation in a porous solid saturated by a three-phase fluid.
Santos, Juan E; Savioli, Gabriela B
2016-02-01
This paper presents a model to describe the propagation of waves in a poroelastic medium saturated by a three-phase viscous, compressible fluid. Two capillary relations between the three fluid phases are included in the model by introducing Lagrange multipliers in the principle of virtual complementary work. This approach generalizes that of Biot for single-phase fluids and makes it possible to determine the strain energy density, identify the generalized strains and stresses, and derive the constitutive relations of the system. The kinetic and dissipative energy density functions are obtained assuming that the relative flow within the pore space is of laminar type and obeys Darcy's law for three-phase flow in porous media. After deriving the equations of motion, a plane wave analysis predicts the existence of four compressional waves, denoted as type I, II, III, and IV waves, and one shear wave. Numerical examples showing the behavior of all waves as functions of saturation and frequency are presented.
Stress Analysis of Composite Cylindrical Shells With an Elliptical Cutout
NASA Technical Reports Server (NTRS)
Nemeth, M. P.; Oterkus, E.; Madenci, E.
2005-01-01
A special-purpose, semi-analytical solution method for determining the stress and deformation fields in a thin laminated-composite cylindrical shell with an elliptical cutout is presented. The analysis includes the effects of cutout size, shape, and orientation; nonuniform wall thickness; oval-cross-section eccentricity; and loading conditions. The loading conditions include uniform tension, uniform torsion, and pure bending. The analysis approach is based on the principle of stationary potential energy and uses Lagrange multipliers to relax the kinematic admissibility requirements on the displacement representations through the use of idealized elastic edge restraints. Specifying appropriate stiffness values for the elastic extensional and rotational edge restraints (springs) allows the imposition of the kinematic boundary conditions in an indirect manner, which enables the use of a broader set of functions for representing the displacement fields. Selected results of parametric studies are presented for several geometric parameters that demonstrate that the analysis approach is a powerful means for developing design criteria for laminated-composite shells.
An inequality for detecting financial fraud, derived from the Markowitz Optimal Portfolio Theory
NASA Astrophysics Data System (ADS)
Bard, Gregory V.
2016-12-01
The Markowitz Optimal Portfolio Theory, published in 1952, is well known and was often taught because it blends Lagrange multipliers, matrices, statistics, and mathematical finance. However, the theory faded from prominence in American investing as business departments at US universities shifted from techniques based on mathematics, finance, and statistics to focus instead on leadership, public speaking, interpersonal skills, advertising, etc. The author proposes a new application of Markowitz's theory: the detection of a fairly broad category of financial fraud (called "Ponzi schemes" in American newspapers) by looking at a particular inequality derived from the Markowitz Optimal Portfolio Theory, relating volatility and expected rate of return. For example, one recent Ponzi scheme was that of Bernard Madoff, uncovered in December 2008, which comprised fraud totaling 64,800,000,000 US dollars [23]. The objective is to compare investments with the "efficient frontier" as predicted by Markowitz's theory. Violations of the inequality should be impossible in theory; therefore, in practice, violations might indicate fraud.
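The kind of inequality involved can be made concrete with the textbook Lagrange-multiplier solution of the minimum-variance problem: minimize w'Σw subject to w'1 = 1 and w'μ = r, which gives the frontier variance (Ar² − 2Br + C)/(AC − B²) with A = 1'Σ⁻¹1, B = 1'Σ⁻¹μ, C = μ'Σ⁻¹μ. The asset returns, covariance, and "reported" track record below are illustrative numbers, not data from the paper.

```python
import numpy as np

mu = np.array([0.06, 0.10, 0.14])                 # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])            # hypothetical covariance
Si = np.linalg.inv(Sigma)
one = np.ones(3)
A, B, C = one @ Si @ one, one @ Si @ mu, mu @ Si @ mu

def frontier_var(r):
    """Minimum attainable variance at target return r, obtained by solving
    for the two Lagrange multipliers of the budget and return constraints."""
    return (A * r * r - 2.0 * B * r + C) / (A * C - B * B)

# Red flag: a reported track record with variance *below* the frontier is
# impossible for any portfolio of these assets
r_report, var_report = 0.12, 0.002
print(var_report < frontier_var(r_report))  # True: impossible, investigate
```

The global minimum variance is 1/A, so any reported variance below that bound is suspect regardless of the claimed return.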
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa
We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
NASA Astrophysics Data System (ADS)
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition either by the direct constraint elimination or by the Lagrange multiplier elimination methods. The macroscopic tangent operators are computed in an efficient way from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of vectors on the right-hand side is equal to the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed using this factorized matrix at a reduced computational time.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few possibilities. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
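Once the control structure is fixed, the standard (non-random-sampling) N-horizon LQR problem reduces to a backward Riccati recursion whose first-step gain approaches the infinite-horizon solution as N grows. The plant and weighting matrices below are made-up illustrations, not the thesis's randomly sampled system.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time double-integrator plant and quadratic weights
Adyn = np.array([[1.0, 0.1], [0.0, 1.0]])
Bdyn = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

# Backward Riccati recursion over an N-step horizon
N = 300
P = Q.copy()                                   # terminal cost P_N = Q
for _ in range(N):
    K = np.linalg.solve(R + Bdyn.T @ P @ Bdyn, Bdyn.T @ P @ Adyn)
    P = Q + Adyn.T @ P @ (Adyn - Bdyn @ K)     # P_k from P_{k+1}

# For large N the first-step gain matches the infinite-horizon gain
P_inf = solve_discrete_are(Adyn, Bdyn, Q, R)
K_inf = np.linalg.solve(R + Bdyn.T @ P_inf @ Bdyn, Bdyn.T @ P_inf @ Adyn)
print(np.abs(K - K_inf).max())  # tiny for large N
```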
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, and approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
Geometric constrained variational calculus I: Piecewise smooth extremals
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2015-05-01
A geometric setup for constrained variational calculus is presented. The analysis deals with the study of the extremals of an action functional defined on piecewise differentiable curves, subject to differentiable, non-holonomic constraints. Special attention is paid to the tensorial aspects of the theory. As far as the kinematical foundations are concerned, a fully covariant scheme is developed through the introduction of the concept of infinitesimal control. The standard classification of the extremals into normal and abnormal ones is discussed, pointing out the existence of an algebraic algorithm assigning to each admissible curve a corresponding abnormality index, related to the co-rank of a suitable linear map. Attention is then shifted to the study of the first variation of the action functional. The analysis includes a revisitation of Pontryagin's equations and of the Lagrange multipliers method, as well as a reformulation of Pontryagin's algorithm in Hamiltonian terms. The analysis is completed by a general result, concerning the existence of finite deformations with fixed endpoints.
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-09
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
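The least-squares estimate of edge lengths mentioned above can be written down directly for a quartet tree: each pairwise distance is the sum of the edges on the connecting path, giving a small linear system. The topology and edge lengths below are a made-up example.

```python
import numpy as np

# Quartet tree ((1,2),(3,4)) with pendant edges e1..e4 and internal edge e5.
# Each row maps one pairwise distance to the edges on the connecting path.
#                     e1 e2 e3 e4 e5
Xdesign = np.array([[1, 1, 0, 0, 0],   # d(1,2)
                    [1, 0, 1, 0, 1],   # d(1,3)
                    [1, 0, 0, 1, 1],   # d(1,4)
                    [0, 1, 1, 0, 1],   # d(2,3)
                    [0, 1, 0, 1, 1],   # d(2,4)
                    [0, 0, 1, 1, 0]],  # d(3,4)
                   dtype=float)

e_true = np.array([0.1, 0.2, 0.15, 0.25, 0.05])
d = Xdesign @ e_true                     # additive (noise-free) distances

# Ordinary least squares for the edge lengths
e_hat, *_ = np.linalg.lstsq(Xdesign, d, rcond=None)
print(np.round(e_hat, 3))  # recovers e_true exactly on additive data
```

Weighted least squares replaces the normal equations with X'WX e = X'Wd for a variance-derived weight matrix W, which is where the four-point condition on the variance matrix discussed in the paper enters.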
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves low rank property via the new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to be the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics
NASA Astrophysics Data System (ADS)
Franceschini, Andrea; Ferronato, Massimiliano; Janna, Carlo; Teatini, Pietro
2016-06-01
The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions.
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
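The Lagrange-multiplier device for unit total mass has a familiar closed form in the simplest (fully observed, discrete) case: maximizing Σ n_i log p_i subject to Σ p_i = 1 gives stationarity n_i/p_i = λ and hence p_i = n_i / Σ n_i. The numeric check below uses that simplified uncensored setting, not the paper's censored bivariate likelihood.

```python
import numpy as np
from scipy.optimize import minimize

n = np.array([5.0, 9.0, 3.0, 7.0])   # hypothetical counts at the mass points

# Numerical maximization of the log-likelihood under the unit-mass constraint
res = minimize(lambda p: -(n * np.log(p)).sum(),
               x0=np.full(4, 0.25),
               constraints={'type': 'eq', 'fun': lambda p: p.sum() - 1.0},
               bounds=[(1e-9, 1.0)] * 4,
               method='SLSQP')

# Closed form from the Lagrange multiplier: p_i = n_i / N
p_closed = n / n.sum()
print(np.round(res.x, 4), np.round(p_closed, 4))
```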
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft are discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from different approaches are described and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
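The Lagrange-multiplier formulation with linear equality constraints reduces to a single KKT linear system for the free linear parameters. The sketch below solves a generic equality-constrained least-squares problem that way; the matrices are arbitrary stand-ins, not the aerodynamic fitting data of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
Amat = rng.standard_normal((20, 4))       # least-squares data matrix
b = rng.standard_normal(20)
Cmat = np.array([[1.0, 1.0, 0.0, 0.0]])   # one linear equality constraint
d = np.array([1.0])                       # enforce x0 + x1 = 1

# KKT system for min ||Ax - b||^2 subject to Cx = d:
# [2A'A  C'] [x  ]   [2A'b]
# [ C    0 ] [lam] = [ d  ]
KKT = np.block([[2.0 * Amat.T @ Amat, Cmat.T],
                [Cmat, np.zeros((1, 1))]])
rhs = np.concatenate([2.0 * Amat.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:4], sol[4:]
print(Cmat @ x)  # the constraint holds exactly
# the gradient of the Lagrangian vanishes at the solution
print(np.allclose(2.0 * Amat.T @ (Amat @ x - b) + Cmat.T @ lam, 0.0))
```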
Ellipticity dependence of multiple ionisation of methyl iodide clusters using a 532 nm nanosecond laser
NASA Astrophysics Data System (ADS)
Tang, Bin; Zhao, Wuduo; Wang, Weiguo; Hua, Lei; Chen, Ping; Hou, Keyong; Huang, Yunguang; Li, Haiyang
2016-03-01
The dependence of multiply charged ions on laser ellipticity in methyl iodide clusters irradiated with a 532 nm nanosecond laser was measured using a time-of-flight mass spectrometer. The intensities of multiply charged ions Iq+ (q = 2-4) with a circularly polarised laser pulse were clearly higher than those with a linearly polarised laser pulse, whereas the intensity of the singly charged ion I+ showed the opposite trend. The dependence of the ion yields on the optical polarisation state was investigated, and flower-petal and square distributions were observed for singly charged ions (I+, C+) and multiply charged ions (I2+, I3+, I4+, C2+), respectively. A theoretical calculation was also proposed to simulate the distributions of the ions, and the theoretical results agreed well with the experimental ones. This indicated that the high multiphoton ionisation probability in the initial stage would result in the disintegration of big clusters into small ones and suppress the production of multiply charged ions.
Blind deconvolution of 2-D and 3-D fluorescent micrographs
NASA Astrophysics Data System (ADS)
Krishnamurthi, Vijaykumar; Liu, Yi-Hwa; Holmes, Timothy J.; Roysam, Badrinath; Turner, James N.
1992-06-01
This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes as well as our simulations with a refined version of the algorithm used in the former. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process and involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data. We found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but we found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope, prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of those we considered to be its region of support). This observation motivated us to apply an upper bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as is the case in the true situation.
We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers. This approach is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler implementation on a computer and has smaller memory requirements. The next section describes briefly the theory and derivation of these constraint equations using Lagrange multipliers.
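The band-limit and non-negativity constraints can be sketched as an alternating-projection loop in the spirit of the Gerchberg-Saxton approach: project onto band-limited functions in Fourier space, then onto non-negative normalized functions in object space. This is a minimal illustration on a toy 2-D PSF, not the authors' implementation; the cutoff frequency, array size, and noise level are arbitrary.

```python
import numpy as np

def constrain_psf(psf, cutoff, n_iter=50):
    """Alternately enforce band-limitedness (Fourier space) and
    non-negativity plus unit energy (object space)."""
    n = psf.shape[0]
    fx = np.fft.fftfreq(n)
    mask = np.add.outer(fx**2, fx**2) <= cutoff**2   # low-pass support
    h = psf.copy()
    for _ in range(n_iter):
        H = np.fft.fft2(h)
        H[~mask] = 0.0                # projection 1: band-limit
        h = np.fft.ifft2(H).real
        h[h < 0] = 0.0                # projection 2: non-negativity
        h /= h.sum()                  # keep unit energy
    return h

# Toy PSF estimate: a Gaussian spot corrupted by noise outside its support
rng = np.random.default_rng(1)
x = np.fft.fftfreq(64)
r2 = np.add.outer(x**2, x**2)
psf0 = np.exp(-r2 / 0.002) + 0.05 * rng.standard_normal((64, 64))
h = constrain_psf(psf0, cutoff=0.2)
```

With the projection order used here the returned estimate is exactly non-negative and normalized, and approximately band-limited; swapping the order trades those properties.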
Compression strength of composite primary structural components
NASA Technical Reports Server (NTRS)
Johnson, Eric R.
1994-01-01
The linear elastic response is determined for an internally pressurized, long circular cylindrical shell stiffened on the inside by a regular arrangement of identical stringers and identical rings. Periodicity of this configuration permits the analysis of a portion of the shell wall centered over a generic stringer-ring joint; i.e., a unit cell model. The stiffeners are modeled as discrete beams, and the stringer is assumed to have a symmetrical cross section and the ring an asymmetrical section. Asymmetry causes out-of-plane bending and torsion of the ring. Displacements are assumed as truncated double Fourier series plus simple terms in the axial coordinate to account for the closed-end pressure-vessel effect (a non-periodic effect). The interacting line loads between the stiffeners and the inside shell wall are Lagrange multipliers in the formulation, and they are also assumed as truncated Fourier series. Displacement continuity constraints between the stiffeners and shell along the contact lines are satisfied point-wise. Equilibrium is imposed by the principle of virtual work. A composite material crown panel from the fuselage of a large transport aircraft is the numerical example. The distributions of the interacting line loads, and the out-of-plane bending moment and torque in the ring, are strongly dependent on modeling the deformations due to transverse shear and cross-sectional warping of the ring in torsion. This paper contains the results from the semiannual report on research on 'Pressure Pillowing of an Orthogonally Stiffened Cylindrical Shell'. The results of the new work are illustrated in the included appendix.
NASA Astrophysics Data System (ADS)
Lv, Gangming; Zhu, Shihua; Hui, Hui
Multi-cell resource allocation under a minimum rate requirement for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperative game, and a distributed multi-cell resource allocation algorithm is thus proposed. Analysis and simulation results show that, compared with the non-cooperative iterative water-filling algorithm, the proposed algorithm can remarkably reduce the ICI level and improve overall system performance.
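In dual-decomposition schemes of this kind, the per-cell subproblem often reduces to water-filling, where the Lagrange multiplier of the power budget is found by one-dimensional bisection. A minimal sketch under that assumption (the channel gains and power budget are invented):

```python
def waterfill(gains, p_total, tol=1e-10):
    """Maximize sum(log(1 + g_k * p_k)) s.t. sum(p_k) <= p_total, p_k >= 0.
    KKT conditions give p_k = max(0, 1/lam - 1/g_k); bisect on the
    Lagrange multiplier lam until the power budget is exactly met."""
    lo, hi = 1e-12, max(gains)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        used = sum(max(0.0, 1.0 / lam - 1.0 / g) for g in gains)
        if used > p_total:
            lo = lam        # price too low, budget exceeded: raise lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    p = [max(0.0, 1.0 / lam - 1.0 / g) for g in gains]
    return p, lam

p, lam = waterfill([2.0, 1.0, 0.2], p_total=3.0)
print([round(x, 3) for x in p])   # -> [1.75, 1.25, 0.0]
```

The weakest subchannel receives no power: its inverse gain lies above the water level 1/lam, which is the standard water-filling behavior.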
What Did We Think Could Be Learned About Earth From Lagrange Point Observations?
NASA Technical Reports Server (NTRS)
Wiscombe, Warren
2011-01-01
The scientific excitement surrounding the NASA Lagrange point mission Triana, now called DSCOVR, tended to be forgotten in the brouhaha over other aspects of the mission. Yet a small band of scientists in 1998 got very excited about the possibilities offered by the Lagrange-point perspective on our planet. As one of the original co-investigators on the Triana mission, I witnessed that scientific excitement firsthand. I will bring to life the early period, circa 1998 to 2000, and share the reasons that we thought the Lagrange-point perspective on Earth would be scientifically revolutionary.
NASA Astrophysics Data System (ADS)
Albersen, Peter J.; Houba, Harold E. D.; Keyzer, Michiel A.
A general approach is presented to value the stocks and flows of water as well as the physical structure of the basin on the basis of an arbitrary process-based hydrological model. This approach adapts concepts from the economic theory of capital accumulation, which are based on Lagrange multipliers that reflect market prices in the absence of markets. This makes it possible to derive a financial account complementing the water balance in which the value of deliveries by the hydrological system fully balances with the value of resources, including physical characteristics reflected in the shape of the functions in the model. The approach naturally suggests the use of numerical optimization software to compute the multipliers, without the need to impose an immensely large number of small perturbations on the simulation model, or to calculate all derivatives analytically. A novel procedure is proposed to circumvent numerical problems in computation, and it is implemented in a numerical application using AQUA, an existing model of the Upper-Zambezi River. It appears, not unexpectedly, that most of the end value accrues to agriculture. Irrigated agriculture receives a remarkably large share, and is by far the most rewarding activity. Furthermore, according to the model, the economic value would be higher if temperatures were lower, pointing to the detrimental effect of climate change. We also find that a significant economic value is stored in the groundwater stock because of its critical role in the dry season. Since groundwater emerges as the main capital of the basin, its mining could be harmful.
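The core idea, that the Lagrange multiplier returned by an optimizer equals the marginal value of the constrained resource, so no perturbation runs are needed, can be checked on a toy allocation problem. The logarithmic benefit functions and parameter values are invented for illustration:

```python
import math

def optimal_value(w, b1=2.0, b2=1.0):
    """Max of b1*log(x) + b2*log(w - x): interior optimum at x = b1*w/(b1+b2)."""
    x = b1 * w / (b1 + b2)
    return b1 * math.log(x) + b2 * math.log(w - x)

w = 10.0
b1, b2 = 2.0, 1.0
# Lagrange multiplier of the budget x + y = w (stationarity: b1/x = lam):
lam = (b1 + b2) / w
# The same number recovered as a marginal value by perturbing the budget:
eps = 1e-6
lam_fd = (optimal_value(w + eps) - optimal_value(w - eps)) / (2 * eps)
print(round(lam, 6), round(lam_fd, 6))   # -> 0.3 0.3
```

The finite-difference estimate reproduces the multiplier, which is exactly the "shadow price" interpretation the abstract exploits; in a large hydrological model the optimizer supplies these multipliers directly, avoiding thousands of perturbed simulation runs.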
Rosen, Joseph; Kelner, Roy
2014-11-17
The Lagrange invariant is a well-known law for optical imaging systems formulated in the frame of ray optics. In this study, we reformulate this law in terms of wave optics and relate it to the resolution limits of various imaging systems. Furthermore, this modified Lagrange invariant is generalized for imaging along the z axis, resulting with the axial Lagrange invariant which can be used to analyze the axial resolution of various imaging systems. To demonstrate the effectiveness of the theory, analysis of the lateral and the axial imaging resolutions is provided for Fresnel incoherent correlation holography (FINCH) systems.
Wrapping conformations of a polymer on a curved surface
NASA Astrophysics Data System (ADS)
Lin, Cheng-Hsiao; Tsai, Yan-Chr; Hu, Chin-Kun
2007-03-01
The conformation of a polymer on a curved surface is an important topic in polymer science. We assume that the free energy of the system is the sum of the bending energy of the polymer and the electrostatic attraction between the polymer and the surface. We further assume that the polymer is very stiff, with an invariant length for each segment, so that we can neglect its tensile energy and treat its length as constant. Based on the principle of minimization of free energy, we apply a variational method with a locally undetermined Lagrange multiplier to obtain a set of equations for the polymer conformation in terms of local geometrical quantities. We have obtained numerical solutions for the conformations of the polymer chain on cylindrical and ellipsoidal surfaces. Under suitable boundary conditions, we find that the free energy profiles of the polymer chains behave differently and depend on the geometry of the surface in both cases. In the former case, the free energy of each segment is distributed within a narrower range and its value per unit length oscillates almost periodically with the azimuthal angle. In the latter case, however, the free energy is distributed over a wider range, with larger values at both ends and smaller values in the middle of the chain. A polymer wrapped around an ellipsoidal surface therefore tends to unwrap from its endpoints. The dependence of the threshold length of a polymer on its initially anchored position is also investigated. Given the initial conditions, the threshold wrapping length is found to increase with the electrostatic attraction strength in the ellipsoidal case. When a polymer wraps around a spherical surface, the threshold length increases monotonically with the radius in the absence of self-intersecting configurations. We also discuss potential applications of the present theory to DNA/protein complexes and further research on DNA on curved surfaces.
Work and heat fluctuations in two-state systems: a trajectory thermodynamics formalism
NASA Astrophysics Data System (ADS)
Ritort, F.
2004-10-01
Two-state models provide phenomenological descriptions of many different systems, ranging from physics to chemistry and biology. We investigate work fluctuations in an ensemble of two-state systems driven out of equilibrium under the action of an external perturbation. We calculate the probability density P_N(W) that work equal to W is exerted upon the system (of size N) along a given non-equilibrium trajectory and introduce a trajectory thermodynamics formalism to quantify work fluctuations in the large-N limit. We then define a trajectory entropy S_N(W), with P_N(W) = exp(S_N(W)/k_B T), that counts the number of non-equilibrium trajectories with work equal to W and characterizes fluctuations of work trajectories around the most probable value W_mp. A trajectory free energy F_N(W) can also be defined, which has a minimum at W = W†, this being the value of the work that has to be efficiently sampled to quantitatively test the Jarzynski equality. Within this formalism a Lagrange multiplier is also introduced, the inverse of which plays the role of a trajectory temperature. Our general solution for P_N(W) exactly satisfies the fluctuation theorem of Crooks and allows us to investigate heat fluctuations for a protocol that is invariant under time reversal. The heat distribution is then characterized by a Gaussian component (describing small and frequent heat exchange events) and exponential tails (describing the statistics of large deviations and rare events). For the latter, the width of the exponential tails is related to the aforementioned trajectory temperature. Finite-size effects on the large-N theory and the recovery of work distributions for finite N are also discussed. Finally, we pay particular attention to the case of magnetic nanoparticle systems under the action of a magnetic field H, where work and heat fluctuations are predicted to be observable in ramping experiments in micro-SQUIDs.
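A minimal numerical check of the Jarzynski equality for a single two-state system under an instantaneous quench, where the average over initial states can be computed exactly. The energy levels are invented for the example; the paper's formalism covers ensembles of N systems and general driving protocols.

```python
import math

beta = 1.0
e0 = {+1: -1.0, -1: +1.0}   # energies before the quench (field h0 = 1)
e1 = {+1: -3.0, -1: +3.0}   # energies after  the quench (field h1 = 3)

z0 = sum(math.exp(-beta * e) for e in e0.values())
z1 = sum(math.exp(-beta * e) for e in e1.values())

# Instantaneous quench: the work on a trajectory in state s is
# W = e1[s] - e0[s], with s drawn from the initial Gibbs distribution.
jarzynski_lhs = sum(math.exp(-beta * e0[s]) / z0
                    * math.exp(-beta * (e1[s] - e0[s]))
                    for s in (+1, -1))
jarzynski_rhs = z1 / z0     # = exp(-beta * delta_F)
print(jarzynski_lhs, jarzynski_rhs)
```

For this zero-time protocol the identity <exp(-beta*W)> = exp(-beta*delta_F) holds exactly term by term; slower protocols require averaging over stochastic trajectories, which is where the trajectory entropy and the rare-event sampling issue around W† enter.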
Efficient dynamic modeling of manipulators containing closed kinematic loops
NASA Astrophysics Data System (ADS)
Ferretti, Gianni; Rocco, Paolo
An approach to efficiently solve the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open loop tree structures (any closed loop can be in general modeled by imposing some additional kinematic constraints to a suitable tree structure) is computed through an efficient Newton Euler formulation; the constraint equations relative to the most commonly adopted closed chains in industrial manipulators are explicitly solved, thus, overcoming the redundancy of Lagrange's multipliers method while avoiding the inefficiency due to a numerical solution of the implicit constraint equations. The constraint equations considered for an explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph type structures). Articulated gear mechanisms are actually used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of the manipulators and their load capacity, as well to reduce the kinematic coupling of joint axes. The accuracy and the efficiency of the proposed approach are shown through a simulation test.
Force Analysis and Energy Operation of Chaotic System of Permanent-Magnet Synchronous Motor
NASA Astrophysics Data System (ADS)
Qi, Guoyuan; Hu, Jianbing
2017-12-01
The disadvantage of a nondimensionalized model of a permanent-magnet synchronous motor (PMSM) is identified. The original PMSM model is transformed into a Kolmogorov system to aid dynamic force analysis. The vector field of the PMSM is analogous to a force field including four types of torque: inertial, internal, dissipative, and generalized external. From a feedback viewpoint, the error torque between the external torque and the dissipative torque is identified. The pitchfork bifurcation of the PMSM is analyzed. Four forms of energy are identified for the system: kinetic, potential, dissipative, and supplied. Physical interpretations of the decomposition of force and of the energy exchange are given. The Casimir energy is the stored energy, and its rate of change is the error power between the dissipative energy and the energy supplied to the motor. The error torque and error power influence the different types of dynamic modes. The Hamiltonian energy and the Casimir energy are compared to find the role of each in producing the dynamic modes. A supremum bound for the chaotic attractor is proposed using the error power and a Lagrange multiplier.
Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System
NASA Astrophysics Data System (ADS)
Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju
2018-03-01
A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for a cooperative relaying system in this paper. Our goal is to find the optimal relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) demand in the data-link layer. Using the concept of effective capacity (EC), this goal can be formulated as an optimal joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves a larger EC than other schemes while satisfying the delay QoS demand. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between the source and destination, and the EC becomes smaller as the QoS exponent becomes larger.
A fictitious domain approach for the simulation of dense suspensions
NASA Astrophysics Data System (ADS)
Gallier, Stany; Lemaire, Elisabeth; Lobry, Laurent; Peters, François
2014-01-01
Low-Reynolds-number concentrated suspensions exhibit an intricate physics which can be partly unraveled by the use of numerical simulation. To this end, a Lagrange multiplier-free fictitious domain approach is described in this work. Unlike some methods recently proposed, the present approach is fully Eulerian and therefore does not need any transfer between the Eulerian background grid and Lagrangian nodes attached to particles. Lubrication forces between particles play an important role in the suspension rheology and have been properly accounted for in the model. A robust and effective lubrication scheme is outlined, which consists in transposing the classical approach used in Stokesian Dynamics to the present direct numerical simulation. This lubrication model has also been adapted to account for solid boundaries such as walls. Contact forces between particles are modeled using a classical Discrete Element Method (DEM), a widely used method in granular matter physics. Comprehensive validations are presented on various one-, two-, and three-particle configurations in a linear shear flow, as well as on some O(10^3) and O(10^4) particle simulations.
On the critical forcing amplitude of forced nonlinear oscillators
NASA Astrophysics Data System (ADS)
Febbo, Mariano; Ji, Jinchen C.
2013-12-01
The steady-state response of forced single degree-of-freedom weakly nonlinear oscillators under primary resonance conditions can exhibit saddle-node bifurcations, jump and hysteresis phenomena, if the amplitude of the excitation exceeds a certain value. This critical value of the excitation amplitude, or critical forcing amplitude, plays an important role in determining the occurrence of saddle-node bifurcations in the frequency-response curve. This work develops an alternative method to determine the critical forcing amplitude for single degree-of-freedom nonlinear oscillators. Based on a Lagrange multiplier approach, the proposed method treats the calculation of the critical forcing amplitude as an optimization problem with constraints imposed by the existence of locations of vertical tangency. In comparison with the Gröbner basis method, the proposed approach is more straightforward and thus easier to apply for finding the critical forcing amplitude both analytically and numerically. Three examples are given to confirm the validity of the theoretical predictions. The first two give the analytical form of the critical forcing amplitude, and the third is an example with a numerically computed solution.
Simulating squeeze flows in multiaxial laminates using an improved TIF model
NASA Astrophysics Data System (ADS)
Ibañez, R.; Abisset-Chavanne, Emmanuelle; Chinesta, Francisco
2017-10-01
Thermoplastic composites are widely used in structural parts. In this paper attention is paid to the squeeze flow of continuous fiber laminates. In the case of unidirectional prepregs, the ply constitutive equation is modeled as a transversally isotropic fluid, which must satisfy both fiber inextensibility and fluid incompressibility. When the laminate is squeezed, the flow kinematics exhibits a complex dependency along the laminate thickness, requiring a detailed velocity description through the thickness. In a former work, a solution making use of an in-plane-out-of-plane separated representation within the PGD (Proper Generalized Decomposition) framework was successfully accomplished, with both kinematic constraints (inextensibility and incompressibility) introduced using a penalty formulation to circumvent the LBB constraints. However, such a formulation makes it difficult to calculate fiber tractions and compression forces, the latter being required in rheological characterizations. In this paper the former penalty formulation is replaced by a mixed formulation that makes use of two Lagrange multipliers, while addressing the LBB stability conditions within the separated representation framework, questions not addressed until now.
Modeling the kinematics of multi-axial composite laminates as a stacking of 2D TIF plies
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Chinesta, Francisco; Huerta, Antonio
2016-10-01
Thermoplastic composites are widely used in structural parts. In this paper attention is paid to sheet forming of continuous fiber laminates. In the case of unidirectional prepregs, the ply constitutive equation is modeled as a transversally isotropic fluid, which must satisfy both fiber inextensibility and fluid incompressibility. When the stacking sequence involves plies with different orientations, the kinematics of each ply during the laminate deformation varies significantly through the composite thickness. In our former works we considered two different approaches to simulating the squeeze flow induced by the laminate compression, the first based on a penalty formulation and the second on the use of Lagrange multipliers. In the present work we propose an alternative approach that consists in modeling each ply involved in the laminate as a transversally isotropic fluid (TIF) that becomes 2D as soon as the incompressibility constraint and the plane-stress assumption are taken into account. Thus, composite laminates can be analyzed as a stacking of 2D TIF models that can interact through adequate friction laws at the inter-ply interfaces.
Some novel features in 2D non-Abelian theory: BRST approach
NASA Astrophysics Data System (ADS)
Srinivas, N.; Kumar, S.; Kureel, B. K.; Malik, R. P.
2017-08-01
Within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism, we discuss some novel features of a two (1+1)-dimensional (2D) non-Abelian 1-form gauge theory (without any interaction with matter fields). Besides the usual off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetry transformations, we discuss the off-shell nilpotent and absolutely anticommuting (anti-)co-BRST symmetry transformations. In particular, we lay emphasis on the existence of the coupled (but equivalent) Lagrangian densities of the 2D non-Abelian theory in view of the presence of (anti-)co-BRST symmetry transformations, where we pinpoint some novel features associated with the Curci-Ferrari (CF-)type restrictions. We demonstrate that these CF-type restrictions can be incorporated into the (anti-)co-BRST invariant Lagrangian densities through fermionic Lagrange multipliers which carry specific ghost numbers. The modified versions of the Lagrangian densities (where we get rid of the new CF-type restrictions) respect some precise symmetries as well as a couple of symmetries with CF-type constraints. These observations are completely novel as far as the BRST formalism, with proper (anti-)co-BRST symmetries, is concerned.
STM contrast of a CO dimer on a Cu(1 1 1) surface: a wave-function analysis.
Gustafsson, Alexander; Paulsson, Magnus
2017-12-20
We present a method used to intuitively interpret the scanning tunneling microscopy (STM) contrast by investigating individual wave functions originating from the substrate and tip side. We use localized basis orbital density functional theory, and propagate the wave functions into the vacuum region at a real-space grid, including averaging over the lateral reciprocal space. Optimization by means of the method of Lagrange multipliers is implemented to perform a unitary transformation of the wave functions in the middle of the vacuum region. The method enables (i) reduction of the number of contributing tip-substrate wave function combinations used in the corresponding transmission matrix, and (ii) to bundle up wave functions with similar symmetry in the lateral plane, so that (iii) an intuitive understanding of the STM contrast can be achieved. The theory is applied to a CO dimer adsorbed on a Cu(1 1 1) surface scanned by a single-atom Cu tip, whose STM image is discussed in detail by the outlined method.
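The unitary transformation obtained "by means of the method of Lagrange multipliers" can be illustrated in a generic setting by the orthogonal Procrustes problem, where stationarity under the unitarity constraint leads to a closed-form solution via the SVD. This is a standard construction, not necessarily the authors' exact formulation:

```python
import numpy as np

# Maximize Re tr(U^H M) over unitary U. Enforcing U^H U = I with a matrix
# of Lagrange multipliers yields the polar/Procrustes solution: with the
# SVD M = V S W^H, the optimizer is U = V W^H and the optimum is sum(S).
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
V, S, Wh = np.linalg.svd(M)
U = V @ Wh

print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary
```

In the wave-function setting, M would collect overlaps between substrate- and tip-side states in the mid-vacuum plane, and U the sought basis rotation that bundles states of similar lateral symmetry.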
Upper Limits for Power Yield in Thermal, Chemical, and Electrochemical Systems
NASA Astrophysics Data System (ADS)
Sieniutycz, Stanislaw
2010-03-01
We consider modeling and power optimization of energy converters, such as thermal, solar and chemical engines and fuel cells. Thermodynamic principles lead to expressions for the converter's efficiency and generated power. Efficiency equations serve to solve the problems of upgrading or downgrading a resource. Power yield is a cumulative effect in a system consisting of a resource, engines, and an infinite bath. While optimization of steady-state systems requires differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. The primary result of static optimization is the upper limit of power, whereas that of dynamic optimization is a finite-rate counterpart of the classical reversible work (exergy). The latter quantity depends on the end-state coordinates and a dissipation index, h, which is the Hamiltonian of the problem of minimum entropy production. In reacting systems, an active part of the chemical affinity constitutes a major component of the overall efficiency. The theory is also applied to fuel cells regarded as electrochemical flow engines. Enhanced bounds on power yield follow, which are stronger than those predicted by the reversible work potential.
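As a small numerical illustration of static power optimization, the classical endoreversible (Curzon-Ahlborn) engine can be maximized over its internal temperatures. Here the entropy-balance constraint is eliminated by substitution rather than carried as an explicit Lagrange multiplier, which is equivalent at the stationary point; the conductances and reservoir temperatures are invented, and the efficiency at maximum power is compared with the known result 1 - sqrt(T_c/T_h).

```python
import math

# Endoreversible engine: heat Q_h = K_h (T_h - x) enters the reversible
# core at temperature x; the core rejects Q_c = K_c (y - T_c) at y.
# The core's entropy balance Q_h / x = Q_c / y fixes y given x, so the
# constrained maximization reduces to a 1-D search over x.
T_h, T_c, K_h, K_c = 400.0, 300.0, 1.0, 1.0

best = (-1e9, 0.0, 0.0)
for i in range(1, 100000):
    x = T_c + (T_h - T_c) * i / 100000.0
    q_h = K_h * (T_h - x)
    y = K_c * T_c / (K_c - q_h / x)      # from Q_h / x = K_c (y - T_c) / y
    p = q_h - K_c * (y - T_c)            # power = Q_h - Q_c
    if p > best[0]:
        best = (p, x, y)

p_max, x_star, y_star = best
eta_star = 1.0 - y_star / x_star
print(round(eta_star, 4), round(1.0 - math.sqrt(T_c / T_h), 4))
```

The grid search recovers the Curzon-Ahlborn efficiency at maximum power, which is weaker than the Carnot bound 1 - T_c/T_h: exactly the kind of finite-rate limit, stronger than the reversible one, that the abstract describes.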
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions of the lower-level programming problem, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and a smoothing technique is then applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
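Smoothing of complementarity constraints is commonly done with a smoothed Fischer-Burmeister function; whether the paper uses this particular variant is an assumption. The sketch below shows the idea on a single complementarity pair: the smooth equation phi(a, b) = 0 replaces the nonsmooth conditions a >= 0, b >= 0, a*b = 0, and its solution converges to the complementarity set as the smoothing parameter vanishes.

```python
import math

def fb_smooth(a, b, eps):
    """One common smoothed Fischer-Burmeister variant:
    phi = a + b - sqrt(a^2 + b^2 + 2*eps^2); phi = 0 approximates
    the complementarity conditions a >= 0, b >= 0, a*b = 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * eps * eps)

def solve_a(b0, eps):
    """Solve phi(a, b0) = 0 for a >= 0 by bisection (phi is increasing in a)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if fb_smooth(mid, b0, eps) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for eps in (1e-1, 1e-3, 1e-6):
    a = solve_a(2.0, eps)        # exact root is a = eps^2 / 2
    print(eps, a)                # a -> 0: complementarity a*b = 0 recovered
```

Inside a KKT system, every multiplier/slack pair gets one such smoothed equation, turning the complementarity system into an ordinary system of smooth nonlinear equations that a Newton-type solver can handle.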
NASA Astrophysics Data System (ADS)
Praturi, Divya Sri; Girimaji, Sharath
2017-11-01
Nonlinear spectral energy transfer by triadic interactions is one of the foundational processes in fluid turbulence. Much of our current knowledge of this process is contingent upon pressure being a Lagrange multiplier whose only function is to re-orient the velocity wave vector. In this study, we examine how the nonlinear spectral transfer is affected in compressible turbulence when pressure is a true thermodynamic variable with a wave character. We perform direct numerical simulations of multi-mode evolution at turbulent Mach numbers of M_t = 0.03 and 0.6. Simulations are performed with initial modes that are fully solenoidal, fully dilatational, and mixed solenoidal-dilatational. It is shown that solenoidal-solenoidal interactions behave in a canonical manner at all Mach numbers. However, dilatational and mixed-mode interactions are profoundly different. This is due to the fact that wave pressure leads to kinetic-internal energy exchange via the pressure-dilatation mechanism. An important consequence of this exchange is that the triple correlation term, responsible for spectral transfer, exhibits non-monotonic behavior, resulting in inefficient energy transfer to other modes.
Gain degradation and efficiencies of spiral electron multipliers
NASA Technical Reports Server (NTRS)
Judge, R. J. R.; Palmer, D. A.
1973-01-01
The characteristics of spiral electron multipliers as functions of accumulated counts were investigated. The mean gain of the multipliers showed a steady decline from about 100 million when new, to about one million after 100 billion events when biased in a saturation mode. For prolonged use in a space environment, improved life expectancy might be obtained with a varying bias voltage adjusted to maintain the gain comfortably above a given discrimination level. Pulse-height distributions at various stages of the lifetime and variations of efficiency with energy of detected electrons are presented.
NASA Astrophysics Data System (ADS)
Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.
2017-06-01
The recent popularity of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even in small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
Mass-dependent channel electron multiplier operation. [for ion detection
NASA Technical Reports Server (NTRS)
Fields, S. A.; Burch, J. L.; Oran, W. A.
1977-01-01
The absolute counting efficiency and pulse height distributions of a continuous-channel electron multiplier used in the detection of hydrogen, argon and xenon ions are assessed. The assessment technique, which involves the post-acceleration of 8-eV ion beams to energies from 100 to 4000 eV, provides information on counting efficiency versus post-acceleration voltage characteristics over a wide range of ion mass. The charge pulse height distributions for H2 (+), A (+) and Xe (+) were measured by operating the experimental apparatus in a marginally gain-saturated mode. It was found that gain saturation occurs at lower channel multiplier operating voltages for light ions such as H2 (+) than for the heavier ions A (+) and Xe (+), suggesting that the technique may be used to discriminate between these two classes of ions in electrostatic analyzers.
NASA Technical Reports Server (NTRS)
Beichman, C.; Gomez, G.; Lo, M.; Masdemont, J.; Romans, L.
2002-01-01
In this paper, we describe the mission design for TPF assuming a distributed spacecraft concept using formation flight in both a halo orbit around L2 and a heliocentric orbit. Although the mission architecture is still under study, the next two years will include study of four design concepts and a downselect to two concepts around 2005.
Integración automatizada de las ecuaciones de Lagrange en el movimiento orbital.
NASA Astrophysics Data System (ADS)
Abad, A.; San Juan, J. F.
The new techniques of algebraic manipulation, especially the Poisson Series Processor, permit the analytical integration of increasingly complex problems of celestial mechanics. The authors are developing a new Poisson Series Processor, PSPC, and use it to solve the Lagrange equations of orbital motion. They integrate the Lagrange equations using the stroboscopic method and apply it to the main problem of artificial satellite theory.
1. EXTERIOR VIEW OF 209 WARE STREET LOOKING SOUTH. THIS ...
1. EXTERIOR VIEW OF 209 WARE STREET LOOKING SOUTH. THIS STRUCTURE WAS ONE OF APPROXIMATELY SEVENTEEN DUPLEXES BUILT AS THE ORIGINAL WORKER HOUSING FOR THE LaGRANGE COTTON MILLS, LATER KNOWN AS CALUMET MILL. LaGRANGE MILLS (1888-89) WAS THE FIRST COTTON MILL IN LaGRANGE. NOTE THE GABLE-ON-HIP ROOF FORM AND TWO IDENTICAL STRUCTURES VISIBLE TO THE LEFT. - 209 Ware Street (House), 209 Ware Street, La Grange, Troup County, GA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor, regardless of the system.
Variational Integrators for Interconnected Lagrange-Dirac Systems
NASA Astrophysics Data System (ADS)
Parks, Helen; Leok, Melvin
2017-10-01
Interconnected systems are an important class of mathematical models, as they allow for the construction of complex, hierarchical, multiphysics, and multiscale models by the interconnection of simpler subsystems. Lagrange-Dirac mechanical systems provide a broad category of mathematical models that are closed under interconnection, and in this paper, we develop a framework for the interconnection of discrete Lagrange-Dirac mechanical systems, with a view toward constructing geometric structure-preserving discretizations of interconnected systems. This work builds on previous work on the interconnection of continuous Lagrange-Dirac systems (Jacobs and Yoshimura in J Geom Mech 6(1):67-98, 2014) and discrete Dirac variational integrators (Leok and Ohsawa in Found Comput Math 11(5), 529-562, 2011). We test our results by simulating some of the continuous examples given in Jacobs and Yoshimura (2014).
Revisiting the Tale of Hercules: How Stars Orbiting the Lagrange Points Visit the Sun
NASA Astrophysics Data System (ADS)
Pérez-Villegas, Angeles; Portail, Matthieu; Wegg, Christopher; Gerhard, Ortwin
2017-05-01
We propose a novel explanation for the Hercules stream consistent with recent measurements of the extent and pattern speed of the Galactic bar. We have adapted a made-to-measure dynamical model tailored for the Milky Way to investigate the kinematics of the solar neighborhood (SNd). The model matches the 3D density of the red clump giant stars (RCGs) in the bulge and bar as well as stellar kinematics in the inner Galaxy, with a pattern speed of 39 km s-1 kpc-1. Cross-matching this model with the Gaia DR1 TGAS data combined with RAVE and LAMOST radial velocities, we find that the model naturally predicts a bimodality in the U-V velocity distribution for nearby stars which is in good agreement with the Hercules stream. In the model, the Hercules stream is made of stars orbiting the Lagrange points of the bar which move outward from the bar's corotation radius to visit the SNd. While the model is not yet a quantitative fit of the velocity distribution, the new picture naturally predicts that the Hercules stream is more prominent inward from the Sun and nearly absent only a few hundred parsecs outward of the Sun, and plausibly explains why Hercules is prominent in old and metal-rich stars.
1. STREETSCAPE VIEW OF 208 VINE STREET (FIRST HOUSE ON ...
1. STREETSCAPE VIEW OF 208 VINE STREET (FIRST HOUSE ON RIGHT) LOOKING WEST. THIS STRUCTURE WAS ONE OF APPROXIMATELY SEVENTEEN DUPLEXES BUILT AS THE ORIGINAL WORKER HOUSING FOR THE LaGRANGE COTTON MILLS, LATER KNOWN AS CALUMET MILL. LaGRANGE MILLS (1888-89) WAS THE FIRST COTTON MILL IN LaGRANGE. NOTE THE GABLE-ON-HIP ROOF FORM AND IDENTICAL STRUCTURES FACING EACH OTHER ALONG BOTH SIDES OF THE NARROW STREET. - 208 Vine Street (House), 208 Vine Street, La Grange, Troup County, GA
3D Bragg coherent diffractive imaging of five-fold multiply twinned gold nanoparticle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jong Woo; Ulvestad, Andrew; Manna, Sohini
2017-08-11
The formation mechanism of five-fold multiply twinned nanoparticles has long been a topic of study because of their geometrical incompatibility, and various models have been proposed to explain how the internal structure of the multiply twinned nanoparticles accommodates the constraints of the solid-angle deficiency. Here, we investigate the internal structure, strain field, and strain energy density of 600 nm five-fold multiply twinned gold nanoparticles quantitatively using Bragg coherent diffractive imaging, which is well suited to the study of buried defects and three-dimensional strain distributions with great precision. Our study reveals that the strain energy density in five-fold multiply twinned gold nanoparticles is an order of magnitude higher than that of single nanocrystals, such as an octahedron and a triangular plate, synthesized under the same conditions. This result indicates that the strain developed while accommodating an angular misfit, although partially released through the introduction of structural defects, is still large throughout the crystal.
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory that allows bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
How to use the Sun-Earth Lagrange points for fundamental physics and navigation
NASA Astrophysics Data System (ADS)
Tartaglia, A.; Lorenzini, E. C.; Lucchesi, D.; Pucacco, G.; Ruggiero, M. L.; Valko, P.
2018-01-01
We illustrate the proposal, nicknamed LAGRANGE, to use spacecraft located at the Sun-Earth Lagrange points as a physical reference frame. By performing time-of-flight measurements of electromagnetic signals traveling on closed paths between the points, we show that it would be possible: (a) to refine knowledge of the gravitational time delay due to both the Sun and the Earth; (b) to detect the gravito-magnetic frame dragging of the Sun, thereby deducing information about the interior of the star; (c) to check the possible existence of a galactic gravitomagnetic field, which would imply a revision of the properties of the dark matter halo; (d) to set up a relativistic positioning and navigation system at the scale of the inner solar system. The paper presents estimated values for the relevant quantities and discusses the feasibility of the project, analyzing the behavior of the space devices close to the Lagrange points.
Xiao, Qiang; Zeng, Zhigang
2017-10-01
Existing results on Lagrange stability and finite-time synchronization for memristive recurrent neural networks (MRNNs) are scale-free with respect to time evolution, and some restrictions arise naturally. In this paper, two novel scale-limited comparison principles are established by means of inequality techniques and the induction principle on time scales. Results concerning Lagrange stability and global finite-time synchronization of MRNNs on time scales are then obtained. Scale-limited Lagrange stability criteria are derived in detail via nonsmooth analysis and the theory of time scales. Moreover, novel criteria for achieving global finite-time synchronization are acquired. In addition, the derived method can also be used to study global finite-time stabilization. The proposed results extend or improve existing ones in the literature. Two numerical examples are chosen to show the effectiveness of the obtained results.
Analytical Dynamics and Nonrigid Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Likins, P. W.
1974-01-01
Applications to the simulation of idealized spacecraft are considered both for multiple-rigid-body models and for models consisting of combinations of rigid bodies and elastic bodies, with the elastic bodies defined either as continua, as finite-element systems, or as collections of given modal data. Several specific examples are developed in detail by alternative methods of analytical mechanics, and results are compared to a Newton-Euler formulation. The following methods are developed from d'Alembert's principle in vector form: (1) Lagrange's form of d'Alembert's principle for independent generalized coordinates; (2) Lagrange's form of d'Alembert's principle for simply constrained systems; (3) Kane's quasi-coordinate formulation of d'Alembert's principle; (4) Lagrange's equations for independent generalized coordinates; (5) Lagrange's equations for simply constrained systems; (6) Lagrangian quasi-coordinate equations (the Boltzmann-Hamel equations); (7) Hamilton's equations for simply constrained systems; and (8) Hamilton's equations for independent generalized coordinates.
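As a minimal illustration of method (4) above, Lagrange's equations for independent generalized coordinates, the sketch below (not from the report; all parameter values are arbitrary choices) integrates the single-pendulum equation of motion that follows from the Lagrangian L = (1/2) m l² θ̇² + m g l cos θ:

```python
import numpy as np

def pendulum_rhs(state, g=9.81, l=1.0):
    """Lagrange's equation for a simple pendulum reduces to
    theta'' = -(g/l) * sin(theta); state = (theta, omega)."""
    theta, omega = state
    return np.array([omega, -(g / l) * np.sin(theta)])

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state, g=9.81, l=1.0):
    """Total mechanical energy per unit mass, conserved by the exact dynamics."""
    theta, omega = state
    return 0.5 * (l * omega) ** 2 - g * l * np.cos(theta)

# Release from rest at 0.5 rad and integrate for one second.
state = np.array([0.5, 0.0])
for _ in range(1000):
    state = rk4_step(pendulum_rhs, state, 1e-3)
```

For this one-degree-of-freedom system the Lagrange and Newton-Euler routes give the same equation; the conserved energy offers a quick check on the integration.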
TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis
NASA Astrophysics Data System (ADS)
Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.
2014-10-01
As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model, named the TaiWan Ionospheric Model (TWIM), has been constructed from vertical Ne profiles retrieved from FormoSat3/Constellation Observing System for Meteorology, Ionosphere, and Climate GPS radio occultation measurements and from worldwide ionosonde foF2 and foE data. The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and surface spherical harmonic approaches for the fitted layer parameters, including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are treated as realizations of stationary stochastic processes within a processing window of 30 days. The autocorrelation coefficients are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and the Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1 day predictions. The forecast foE and foF2 values from the TWIM are also compared and evaluated using worldwide ionosonde data.
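The autoregressive forecasting step can be sketched in a few lines: a least-squares AR(p) fit followed by recursive prediction. This is an illustrative stand-in rather than the authors' implementation; the model order, window length, and test signal below are assumptions:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients a, with
    x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    X = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def forecast(x, a, steps):
    """Recursive multi-step forecast from fitted AR coefficients."""
    hist = list(x)
    p = len(a)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(a, hist[-1:-p - 1:-1]))  # newest lag first
        hist.append(nxt)
        out.append(nxt)
    return out

# Demo on a synthetic AR(1) series with coefficient 0.8.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
a = fit_ar(x, 1)        # recovers a coefficient near 0.8
pred = forecast(x, a, 5)
```

In the paper's setting, each spherical-harmonic coefficient of the TWIM would play the role of `x` within its 30-day processing window.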
Sherman, H; Nguyen, A V; Bruckard, W
2016-11-22
Atomic force microscopy makes it possible to measure the interacting forces between individual colloidal particles and air bubbles, which can provide a measure of the particle hydrophobicity. To indicate the level of hydrophobicity of the particle, the contact angle can be calculated, assuming that no interfacial deformation occurs and that the bubble retains a spherical profile. Our experimental results, obtained using a modified sphere tensiometry apparatus to detach submillimeter spherical particles, show that deformation of the bubble interface does occur during particle detachment. We also develop a theoretical model to describe the equilibrium shape of the bubble meniscus at any given particle position, based on the minimization of the free energy of the system. The developed model allows us to analyze high-speed video captured during detachment. In the system model, deformation of the bubble profile is accounted for by the incorporation of a Lagrange multiplier into both the Young-Laplace equation and the force balance. The solution of the bubble profile matched to the high-speed video allows us to accurately calculate the contact angle and determine the total force balance as a function of the contact point of the bubble on the particle surface.
Canonical quantization of constrained systems and coadjoint orbits of Diff(S^1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherer, W.M.
It is shown that Dirac's treatment of constrained Hamiltonian systems and Schwinger's action principle quantization lead to identical commutation relations. An explicit relation between the Lagrange multipliers in the action principle approach and the additional terms in the Dirac bracket is derived. The equivalence of the two methods is demonstrated in the case of the non-linear sigma model. Dirac's method is extended to superspace, and this extension is applied to the chiral superfield. The Dirac brackets of the massive interacting chiral superfield are derived and shown to give the correct commutation relations for the component fields. The Hamiltonian of the theory is given and the Hamiltonian equations of motion are computed; they agree with the component field results. An infinite sequence of differential operators which are covariant under the coadjoint action of Diff(S^1) and analogous to Hill's operator is constructed. They map conformal fields of negative integer and half-integer weight to their dual space. Some properties of these operators are derived and possible applications are discussed. The Korteweg-de Vries equation is formulated as a coadjoint orbit of Diff(S^1).
Antigravity in F( R) and Brans-Dicke theories
NASA Astrophysics Data System (ADS)
Oikonomou, V. K.; Karagiannakis, N.
2014-12-01
We study antigravity in scalar-tensor theories originating from F(R) theory and also in Brans-Dicke models without a cosmological constant. For the F(R) theory case, we obtain the Jordan frame antigravity scalar-tensor theory by using a variant of the Lagrange multipliers method, and we numerically study the time-dependent effective gravitational constant. As we demonstrate in detail using some viable F(R) models, although the initial F(R) models have no antigravity, their scalar-tensor counterpart theories may or may not have antigravity, a fact depending mainly on the parameter that characterizes antigravity. Similar results hold true in the Brans-Dicke model, which we also studied numerically. In addition, for the Brans-Dicke model we also found some analytic cosmological solutions. Since antigravity is an unwanted feature in gravitational theories, our findings suggest that in the case of F(R) theories, antigravity does not occur in the real world described by the F(R) theory, but might occur, under certain circumstances, in the Jordan frame scalar-tensor counterpart of the F(R) theory. The central goal of our study is to present all the different cases in which antigravity might occur in modified gravity models.
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
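The contrast between the two approaches can be seen on a one-dimensional toy problem: minimize x² subject to x ≥ 1, whose solution is x* = 1 with multiplier λ* = 2. The sketch below is a hedged illustration of the general idea, not the authors' algorithm; note that the modified-barrier scheme keeps μ fixed and instead refines the multiplier estimate between inner solves:

```python
def classical_barrier_min(mu, x0=2.0, iters=100):
    """Minimize x**2 - mu*log(x - 1) over x > 1 by safeguarded Newton.
    Its Hessian, 2 + mu/(x-1)**2, blows up at the minimizer as mu -> 0."""
    x = x0
    for _ in range(iters):
        g = 2 * x - mu / (x - 1)
        h = 2 + mu / (x - 1) ** 2
        step = g / h
        while x - step <= 1:            # keep the iterate strictly feasible
            step *= 0.5
        x -= step
    return x

def modified_barrier_min(mu, lam, x0=2.0, iters=100):
    """Minimize x**2 - mu*lam*log(1 + (x - 1)/mu): a shifted barrier
    scaled by the current multiplier estimate lam."""
    x = x0
    for _ in range(iters):
        s = 1 + (x - 1) / mu
        g = 2 * x - lam / s
        h = 2 + lam / (mu * s ** 2)
        step = g / h
        while 1 + (x - step - 1) / mu <= 0:   # stay inside the shifted region
            step *= 0.5
        x -= step
    return x

def modified_barrier_method(mu=0.1, lam=1.0, x=2.0, outer=20):
    """mu stays fixed; only the multiplier estimate is refined."""
    for _ in range(outer):
        x = modified_barrier_min(mu, lam, x0=x)
        lam = lam / (1 + (x - 1) / mu)   # multiplier update
    return x, lam
```

In the classical method the Hessian grows without bound as μ → 0, whereas the modified method converges with μ held fixed as the multiplier estimate approaches λ* = 2, which is the bounded-conditioning property described above.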
Hamilton's principle and normal mode coupling in an aspherical planet with a fluid core
NASA Astrophysics Data System (ADS)
Al-Attar, David; Crawford, Ophelia; Valentine, Andrew P.; Trampert, Jeannot
2018-04-01
We apply Hamilton's principle to obtain the exact equations of motion for an elastic planet that is rotating, self-gravitating, and comprises both fluid and solid regions. This variational problem is complicated by the occurrence of tangential slip at fluid-solid boundaries, but we show how this can be accommodated both directly and using the method of Lagrange multipliers. A novelty of our approach is that the planet's motion is described relative to an arbitrary reference configuration, with this generality offering advantages for numerical calculations. In particular, aspherical topography on the free surface or internal boundaries of the planet's equilibrium configuration can be converted exactly into effective volumetric heterogeneities within a geometrically spherical reference body by applying a suitable particle relabelling transformation. The theory is then specialised to consider the linearised motion of a planet about a steadily rotating equilibrium configuration, with these results having applications to normal mode coupling calculations used within studies of long period seismology, tidal deformation, and related fields. In particular, we explain how our new theory will, for the first time, allow aspherical boundary topography to be incorporated exactly within such coupling calculations.
Full-degrees-of-freedom frequency based substructuring
NASA Astrophysics Data System (ADS)
Drozg, Armin; Čepon, Gregor; Boltežar, Miha
2018-01-01
Dividing a system into multiple subsystems for separate dynamic analysis is common practice in the field of structural dynamics. The substructuring process improves computational efficiency and enables an effective realization of local optimization, model updating, and sensitivity analyses. This paper focuses on frequency-based substructuring methods using experimentally obtained data. An efficient substructuring process has already been demonstrated using numerically obtained frequency-response functions (FRFs). However, the experimental process suffers from several difficulties, many of which are related to the rotational degrees of freedom. Thus, several attempts have been made to measure, expand, or combine numerical correction methods in order to obtain a complete response model. The proposed methods have numerous limitations and are not yet generally applicable. Therefore, in this paper an alternative approach, based on experimentally obtained data only, is proposed. The force-excited part of the FRF matrix is measured with piezoelectric translational and rotational direct accelerometers. The incomplete moment-excited part of the FRF matrix is expanded based on the modal model. The proposed procedure is integrated in the Lagrange Multiplier Frequency Based Substructuring method and demonstrated on a simple beam structure, where the connection coordinates are mainly associated with the rotational degrees of freedom.
Open Group Transformations Within the Sp(2)-Formalism
NASA Astrophysics Data System (ADS)
Batalin, Igor; Marnelius, Robert
Previously we have shown that open groups whose generators are in arbitrary involutions may be quantized within a ghost extended framework in terms of the nilpotent BFV-BRST charge operator. Here we show that they may also be quantized within an Sp(2)-frame in which there are two odd anticommuting operators called Sp(2)-charges. Previous results for finite open group transformations are generalized to the Sp(2)-formalism. We show that in order to define open group transformations on the whole ghost extended space we need Sp(2)-charges in the nonminimal sector which contains dynamical Lagrange multipliers. We give an Sp(2)-version of the quantum master equation with extended Sp(2)-charges and a master charge of a more involved form, which is proposed to represent the integrability conditions of defining operators of connection operators and which therefore should encode the generalized quantum Maurer-Cartan equations for arbitrary open groups. General solutions of this master equation are given in explicit form. A further extended Sp(2)-formalism is proposed in which the group parameters are quadrupled to a supersymmetric set and from which all results may be derived.
Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach
NASA Astrophysics Data System (ADS)
Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan
2017-11-01
The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive, level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation to the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms ONIOM by 32% to 70% at each combination of levels of theory.
Discrete Inverse and State Estimation Problems
NASA Astrophysics Data System (ADS)
Wunsch, Carl
2006-06-01
The problems of making inferences about the natural world from noisy observations and imperfect theories occur in almost all scientific disciplines. This book addresses these problems using examples taken from geophysical fluid dynamics. It focuses on discrete formulations, both static and time-varying, known variously as inverse, state estimation or data assimilation problems. Starting with fundamental algebraic and statistical ideas, the book guides the reader through a range of inference tools including the singular value decomposition, Gauss-Markov and minimum variance estimates, Kalman filters and related smoothers, and adjoint (Lagrange multiplier) methods. The final chapters discuss a variety of practical applications to geophysical flow problems. Discrete Inverse and State Estimation Problems is an ideal introduction to the topic for graduate students and researchers in oceanography, meteorology, climate dynamics, and geophysical fluid dynamics. It is also accessible to a wider scientific audience; the only prerequisite is an understanding of linear algebra. The book provides a comprehensive introduction to discrete methods of inference from incomplete information; is based upon 25 years of practical experience using real data and models; develops sequential and whole-domain analysis methods from simple least squares; and contains many examples and problems, with web-based support through MIT OpenCourseWare.
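Of the inference tools listed, the truncated singular value decomposition is perhaps the simplest to demonstrate: it yields the minimum-norm least-squares estimate of an underdetermined system directly. A minimal sketch (illustrative only, not taken from the book):

```python
import numpy as np

def svd_solve(E, y, rank=None):
    """Minimum-norm least-squares solution of E x ~ y via truncated SVD."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    if rank is None:
        # Discard numerically null singular directions.
        rank = int(np.sum(s > s[0] * 1e-12))
    coeff = (U[:, :rank].T @ y) / s[:rank]
    return Vt[:rank].T @ coeff
```

Truncating the rank below the numerical rank trades resolution for variance, which is the basic tuning knob in SVD-based inverse methods.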
NASA Astrophysics Data System (ADS)
Rangarajan, Ramsharan; Gao, Huajian
2015-09-01
We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems typically issued from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is then to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information exploiting the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
Nonlinear Analysis of Bonded Composite Tubular Lap Joints
NASA Technical Reports Server (NTRS)
Oterkus, E.; Madenci, E.; Smeltzer, S. S., III; Ambur, D. R.
2005-01-01
The present study describes a semi-analytical solution method for predicting the geometrically nonlinear response of a bonded composite tubular single-lap joint subjected to general loading conditions. The transverse shear and normal stresses in the adhesive as well as membrane stress resultants and bending moments in the adherends are determined using this method. The method utilizes the principle of virtual work in conjunction with nonlinear thin-shell theory to model the adherends and a cylindrical shear lag model to represent the kinematics of the thin adhesive layer between the adherends. The kinematic boundary conditions are imposed by employing the Lagrange multiplier method. In the solution procedure, the displacement components for the tubular joint are approximated in terms of non-periodic and periodic B-Spline functions in the longitudinal and circumferential directions, respectively. The approach presented herein represents a rapid-solution alternative to the finite element method. The solution method was validated by comparison against a previously considered tubular single-lap joint. The steep variation of both peeling and shearing stresses near the adhesive edges was successfully captured. The applicability of the present method was also demonstrated by considering tubular bonded lap-joints subjected to pure bending and torsion.
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction
NASA Astrophysics Data System (ADS)
Liang, Guanghui; Ren, Shangjie; Dong, Feng
2017-07-01
The free-interface detection problem arises frequently in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To address this issue, an ultrasound guided EIT is proposed to directly reconstruct the geometric configuration of the target free-interface. In this method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted at the opposite side of the objective domain, and this position measurement is then used as prior information to guide the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound guided EIT method for free-interface reconstruction is more accurate than the single modality method, especially when the number of valid electrodes is limited.
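The constrained least squares fusion step can be illustrated with a toy problem. In the sketch below the names and dimensions are assumptions, and a direct KKT solve stands in for the paper's Lagrange multiplier-based Levenberg-Marquardt iteration: a hard equality constraint (such as an interface point pinned by the ultrasound measurement) is enforced exactly while the remaining data are fit in the least squares sense.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): equality-constrained
# least squares.  We minimize ||A x - b||^2 subject to C x = d by solving
# the KKT system
#   [ A^T A  C^T ] [x]   [A^T b]
#   [ C      0   ] [y] = [ d   ]
# where y holds the (scaled) Lagrange multipliers.

def constrained_lstsq(A, b, C, d):
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # solution x, multipliers y

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
C = np.array([[1.0, -1.0]])          # hard constraint: x0 = x1
d = np.array([0.0])
x, y = constrained_lstsq(A, b, C, d)
print(x)                             # both components equal (7/6 here)
```

The same structure appears whenever one modality supplies hard geometric information and another supplies soft, noisy measurements.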
Strongly Coupled Fluid-Body Dynamics in the Immersed Boundary Projection Method
NASA Astrophysics Data System (ADS)
Wang, Chengjie; Eldredge, Jeff D.
2014-11-01
A computational algorithm is developed to simulate dynamically coupled interaction between fluid and rigid bodies. The basic computational framework is built upon a multi-domain immersed boundary method library, whirl, developed in previous work. In this library, the Navier-Stokes equations for incompressible flow are solved on a uniform Cartesian grid by the vorticity-based immersed boundary projection method of Colonius and Taira. A solver for the dynamics of rigid-body systems is also included. The fluid and rigid-body solvers are strongly coupled with an iterative approach based on the block Gauss-Seidel method. The interfacial force, with its intimate connection to the Lagrange multipliers used in the fluid solver, is used as the primary iteration variable. Relaxation, developed from a stability analysis of the iterative scheme, is used to achieve convergence in only 2-4 iterations per time step. Several two- and three-dimensional numerical tests are conducted to validate and demonstrate the method, including the flapping of flexible wings, self-excited oscillations of a system of linked plates, and three-dimensional propulsion of a flexible fluked tail. This work has been supported by AFOSR, under Award FA9550-11-1-0098.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing a low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of the original data can be paired with some informative structure by imposing an appropriate constraint, e.g., a Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
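The inexact augmented Lagrange multiplier (ALM) algorithm mentioned above repeatedly applies the proximal operator of the nuclear norm, i.e. singular value thresholding (SVT). The following minimal sketch shows only this shrinkage step, with made-up data; the surrounding ALM loop (multiplier and penalty updates) is omitted.

```python
import numpy as np

# A minimal sketch of the workhorse inside inexact ALM solvers for
# nuclear-norm minimization: the singular value thresholding (SVT)
# operator, the proximal operator of tau * ||X||_*.

def svt(X, tau):
    """Soft-threshold the singular values of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Deterministic example: a matrix with known singular values.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((6, 6)))
Q2, _ = np.linalg.qr(rng.standard_normal((6, 6)))
X = Q1 @ np.diag([5.0, 3.0, 1.0, 0.2, 0.01, 0.0]) @ Q2.T
Y = svt(X, 0.5)
print(np.linalg.matrix_rank(Y))   # → 3: values below the threshold vanish
```

In the full algorithm this shrinkage alternates with a closed-form update of the other variables and a gradient-ascent update of the Lagrange multiplier matrix.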
Development of Coriolis mass flowmeter with digital drive and signal processing technology.
Hou, Qi-Li; Xu, Ke-Jun; Fang, Min; Liu, Cui; Xiong, Wen-Jun
2013-09-01
The Coriolis mass flowmeter (CMF) often suffers from two-phase flow, which may cause the flowtube to stall. To solve this problem, a digital drive method and a digital signal processing method for the CMF are studied and implemented in this paper. A positive-negative step signal is used to initiate the flowtube oscillation without knowing the natural frequency of the flowtube. A digital zero-crossing detection method based on Lagrange interpolation is adopted to calculate the frequency and phase difference of the sensor output signals in order to synthesize the digital drive signal. The digital drive approach is implemented with a multiplying digital-to-analog converter (MDAC) and a direct digital synthesizer (DDS). A digital Coriolis mass flow transmitter is developed with a digital signal processor (DSP) to control the digital drive and realize the signal processing. Water flow calibrations and gas-liquid two-phase flow experiments are conducted to examine the performance of the transmitter. The experimental results show that the transmitter shortens the start-up time and can maintain the oscillation of the flowtube under two-phase flow conditions. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
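A zero-crossing detector of the kind described can be sketched as follows. This is an assumed, simplified illustration, not the transmitter's firmware: around each sign change, the Lagrange polynomial through three neighboring samples is fit and its root located, giving a sub-sample estimate of the crossing time; the frequency follows from the spacing of successive rising crossings.

```python
import math

# Sub-sample zero-crossing detection via 3-point Lagrange interpolation
# (assumed, simplified form of the technique named in the abstract).

def zero_crossings(t, y):
    """Rising zero-crossing times of sampled signal y(t)."""
    crossings = []
    for i in range(1, len(y) - 1):
        if y[i - 1] < 0.0 <= y[i]:
            ts, ys = t[i - 1:i + 2], y[i - 1:i + 2]
            # Quadratic Lagrange polynomial through samples i-1, i, i+1.
            def p(x):
                return sum(ys[j] * math.prod((x - ts[k]) / (ts[j] - ts[k])
                                             for k in range(3) if k != j)
                           for j in range(3))
            # Refine the root by bisection on [t[i-1], t[i]].
            lo, hi = t[i - 1], t[i]
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
            crossings.append(0.5 * (lo + hi))
    return crossings

fs, f0 = 2000.0, 81.3                       # sample rate, true frequency (Hz)
t = [n / fs for n in range(400)]
y = [math.sin(2 * math.pi * f0 * x) for x in t]
zc = zero_crossings(t, y)
f_est = 1.0 / (zc[1] - zc[0])               # one period between crossings
print(f_est)                                # close to 81.3 Hz
```

Applying the same estimator to the two sensor signals and differencing the crossing times yields the phase difference used in the mass-flow computation.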
NASA Astrophysics Data System (ADS)
Singh, Inderjeet; Singh, Bhajan; Sandhu, B. S.; Sabharwal, Arvind D.
2017-04-01
A method is presented for calculating the effective atomic number (Zeff) of composite materials using back-scattering of 662 keV gamma photons from a 137Cs mono-energetic radioactive source. The technique is non-destructive and is employed to evaluate Zeff of different composite materials by scattering gamma photons from semi-infinite material in a back-scattering geometry, using a 3″ × 3″ NaI(Tl) scintillation detector. The present work studies the effect of target thickness on the intensity distribution of gamma photons multiply back-scattered from targets (pure elements) and composites (mixtures of different elements). The intensity of multiply back-scattered events increases with increasing target thickness and finally saturates. The saturation thickness for multiply back-scattered events is used to assign a number (Zeff) to multi-element materials. The response function of the 3″ × 3″ NaI(Tl) scintillation detector is applied to the observed pulse-height distribution to include the contribution of partially absorbed photons. The reduced signal-to-noise ratio reflects the increase in multiply back-scattered events in the response-corrected spectrum. Data obtained from Monte Carlo simulations and the literature also support the present experimental results.
Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach
NASA Technical Reports Server (NTRS)
Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.
2006-01-01
A viewgraph presentation on centrifuge rotor models with a comparison using Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.
On the commutator of C^∞-symmetries and the reduction of Euler-Lagrange equations
NASA Astrophysics Data System (ADS)
Ruiz, A.; Muriel, C.; Olver, P. J.
2018-04-01
A novel procedure to reduce by four the order of Euler-Lagrange equations associated with nth order variational problems involving single variable integrals is presented. In preparation, a new formula for the commutator of two C^∞-symmetries is established.
Dirac structures in vakonomic mechanics
NASA Astrophysics Data System (ADS)
Jiménez, Fernando; Yoshimura, Hiroaki
2015-08-01
In this paper, we explore dynamics of the nonholonomic system called vakonomic mechanics in the context of Lagrange-Dirac dynamical systems using a Dirac structure and its associated Hamilton-Pontryagin variational principle. We first show the link between vakonomic mechanics and nonholonomic mechanics from the viewpoints of Dirac structures as well as Lagrangian submanifolds. Namely, we clarify that Lagrangian submanifold theory cannot represent nonholonomic mechanics properly, but vakonomic mechanics instead. Second, in order to represent vakonomic mechanics, we employ the space TQ ×V∗, where a vakonomic Lagrangian is defined from a given Lagrangian (possibly degenerate) subject to nonholonomic constraints. Then, we show how implicit vakonomic Euler-Lagrange equations can be formulated by the Hamilton-Pontryagin variational principle for the vakonomic Lagrangian on the extended Pontryagin bundle (TQ ⊕T∗ Q) ×V∗. Associated with this variational principle, we establish a Dirac structure on (TQ ⊕T∗ Q) ×V∗ in order to define an intrinsic vakonomic Lagrange-Dirac system. Furthermore, we also establish another construction for the vakonomic Lagrange-Dirac system using a Dirac structure on T∗ Q ×V∗, where we introduce a vakonomic Dirac differential. Finally, we illustrate our theory of vakonomic Lagrange-Dirac systems by some examples such as the vakonomic skate and the vertical rolling coin.
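For reference, the vakonomic equations discussed above take the following textbook form for a Lagrangian L(q, q̇) subject to kinematic constraints φ^a(q, q̇) = 0 (standard notation, not necessarily the paper's):

```latex
% Vakonomic mechanics: extremize the augmented action over both the
% curve q(t) and the multipliers \lambda_a (textbook form).
\begin{aligned}
  S[q,\lambda] &= \int \Big( L(q,\dot q) + \lambda_a\,\phi^a(q,\dot q) \Big)\,dt,\\
  \frac{d}{dt}\frac{\partial (L+\lambda_a\phi^a)}{\partial \dot q^i}
    &- \frac{\partial (L+\lambda_a\phi^a)}{\partial q^i} = 0,
  \qquad \phi^a(q,\dot q) = 0 .
\end{aligned}
```

Unlike the Lagrange-d'Alembert equations of nonholonomic mechanics, the multipliers here are dynamical variables of the variational principle itself, which is what the Dirac-structure construction on (TQ ⊕ T∗Q) × V∗ encodes intrinsically.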
NASA Astrophysics Data System (ADS)
Tarasov, V. F.
In the present paper, exact formulae for the calculation of zeros of Rnl(r) and 1F1(-a; c; z), where z = 2λr, a = n - l - 1 >= 0 and c = 2l + 2 >= 2, are presented. For a <= 4, the methods due to Tartaglia and Cardano, and to L. Ferrari, L. Euler and J.-L. Lagrange, are used. In other cases (a > 4) numerical methods are employed to obtain the results (to within 10^-15). For greater geometrical clarity of the irregular distribution (for a > 3) of the zeros xk = zk - (c + a - 1) on the axis y = 0, circular diagrams with radius Ra = (a - 1) √(c + a - 1) are presented for the first time. It is possible to notice some singularities in the distribution of these zeros and of their images, the points Tk, on the circle. For a = 3 and 4 their exact "angle" asymptotics (as c --> ∞) are obtained. It is shown that on the basis of the L. Ferrari, L. Euler and J.-L. Lagrange methods, used for solving the equation 1F1(-4; c; z) = 0, one…
78 FR 43821 - Final Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
[Table excerpt: final flood elevations in feet; + indicates North American Vertical Datum; depth in feet above ground; ^ Mean Sea Level]
…: +902, Unincorporated Areas of LaGrange County.
Big Long Lake, entire shoreline: +957, Unincorporated Areas of LaGrange County.
Big Turkey Lake, entire shoreline within …: +932, Unincorporated Areas of …
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm ɛ.
2013-08-01
[Report front-matter excerpt] Approved for public release; distribution unlimited. PA Number 412-TW-PA-13395. Nomenclature: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange … cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, … The report uses the Radau Pseudospectral Method (RPM), in which the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions; GPOPS also automatically refines the "mesh" by …
Statistics of multiply scattered broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2003-07-25
We describe the first measurements of the diffusion of broadband single-cycle optical pulses through a highly scattering medium. Using terahertz time-domain spectroscopy, we measure the electric field of a multiply scattered wave with a time resolution shorter than one optical cycle. This time-domain measurement provides information on the statistics of both the amplitude and phase distributions of the diffusive wave. We develop a theoretical description, suitable for broadband radiation, which adequately describes the experimental results.
Maximum Renyi entropy principle for systems with power-law Hamiltonians.
Bashkirov, A G
2004-09-24
The Renyi distribution ensuring the maximum of Renyi entropy is investigated for a particular case of a power-law Hamiltonian. Both Lagrange parameters alpha and beta can be eliminated. It is found that beta does not depend on a Renyi parameter q and can be expressed in terms of an exponent kappa of the power-law Hamiltonian and an average energy U. The Renyi entropy for the resulting Renyi distribution reaches its maximal value at q=1/(1+kappa) that can be considered as the most probable value of q when we have no additional information on the behavior of the stochastic process. The Renyi distribution for such q becomes a power-law distribution with the exponent -(kappa+1). When q=1/(1+kappa)+epsilon (0
NASA Astrophysics Data System (ADS)
Koshkarbayev, Nurbol; Kanguzhin, Baltabek
2017-09-01
In this paper we study the question of the full description of well-posed restrictions of a given maximal differential operator on a tree graph. A Lagrange formula for a differential operator on a tree with Kirchhoff conditions at its internal vertices is presented.
ERIC Educational Resources Information Center
Lovell, M.S.
2007-01-01
This paper presents a derivation of all five Lagrange points by methods accessible to sixth-form students, and provides a further opportunity to match Newtonian gravity with centripetal force. The predictive powers of good scientific theories are also discussed with regard to the philosophy of science. Methods for calculating the positions of the…
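The matching of Newtonian gravity with centripetal force that the paper describes can be sketched numerically for the Sun-Earth L1 point. This is an assumed illustration, not the paper's own derivation: the net gravitational pull must supply exactly the centripetal acceleration for co-rotation at Earth's angular rate, and the resulting equation is solved for the distance r of L1 from Earth by bisection.

```python
import math

# Collinear Lagrange point L1 of the Sun-Earth system (sketch).
# Force balance for a test mass co-rotating at Earth's angular rate:
#   G*Ms/(R-r)^2 - G*Me/r^2 = omega^2 * (R - r),
# where r is the distance of L1 from Earth.

G = 6.674e-11          # gravitational constant (SI)
Ms = 1.989e30          # solar mass, kg
Me = 5.972e24          # Earth mass, kg
R = 1.496e11           # Sun-Earth distance, m
omega2 = G * (Ms + Me) / R**3   # squared angular rate of the orbit

def f(r):
    return G * Ms / (R - r)**2 - G * Me / r**2 - omega2 * (R - r)

lo, hi = 1e8, 1e10     # bracket: L1 lies well inside Earth's orbit
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
r_L1 = 0.5 * (lo + hi)
print(r_L1 / 1e9)      # ~1.5 million km from Earth
```

The answer agrees with the Hill-radius approximation r ≈ R (Me / 3Ms)^(1/3), which is the closed form usually quoted in textbook treatments.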
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.
2016-10-01
We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the xLMA method proposed here inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a unified open-source framework for modeling chemically reactive systems.
Bounded state variables and the calculus of variations
NASA Technical Reports Server (NTRS)
Hanafy, L. M.
1972-01-01
An optimal control problem with bounded state variables is transformed into a Lagrange problem by means of differentiable mappings which take some Euclidean space onto the control and state regions. Whereas all such mappings lead to a Lagrange problem, it is shown that only those which are defined as acceptable pairs of transformations are suitable in the sense that solutions to the transformed Lagrange problem will lead to solutions to the original bounded state problem and vice versa. In particular, an acceptable pair of transformations is exhibited for the case when the control and state regions are right parallelepipeds. Finally, a description of the necessary conditions for the bounded state problem which were obtained by this method is given.
A method for characterizing after-pulsing and dark noise of PMTs and SiPMs
NASA Astrophysics Data System (ADS)
Butcher, A.; Doria, L.; Monroe, J.; Retière, F.; Smith, B.; Walding, J.
2017-12-01
Photo-multiplier tubes (PMTs) and silicon photo-multipliers (SiPMs) are detectors sensitive to single photons that are widely used for the detection of scintillation and Cerenkov light in subatomic physics and medical imaging. This paper presents a method for characterizing two of the main noise sources that PMTs and SiPMs share: dark noise and correlated noise (after-pulsing). The proposed method allows for a model-independent measurement of the after-pulsing timing distribution and dark noise rate.
Iodine Plasma Species Measurements in a Hall Effect Thruster Plume
2013-05-01
[Presentation excerpt] The distribution of multiply charged iodine ions was measured with an ExB probe, an electrostatic analyzer (ESA), and a combined ESA/ExB probe; the distribution of xenon ions was also measured. Results: multiply charged species … Test hardware: vacuum test facility (6 ft diameter); Faraday probe (MIT); ESA, ExB, and ESA/ExB probes (Plasma Controls); rotary probe arm …
Necessary conditions for weighted mean convergence of Lagrange interpolation for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.; Kwon, K. H.
2001-07-01
Given a continuous real-valued function f which vanishes outside a fixed finite interval, we establish necessary conditions for weighted mean convergence of Lagrange interpolation for a general class of even weights w which are of exponential decay on the real line or at the endpoints of (-1,1).
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
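The component functions the article examines can be sketched in a few lines (an assumed illustration, not the article's own code): each Lagrange basis polynomial equals 1 at its own node and 0 at the others, so the interpolant is simply a weighted sum of these components.

```python
# Lagrange interpolation built from its component basis polynomials:
# ell_j(x) = prod_{k != j} (x - x_k) / (x_j - x_k), and
# p(x) = sum_j y_j * ell_j(x).

def lagrange_basis(nodes, j, x):
    """Evaluate the j-th Lagrange basis polynomial at x."""
    val = 1.0
    for k, xk in enumerate(nodes):
        if k != j:
            val *= (x - xk) / (nodes[j] - xk)
    return val

def lagrange_interp(nodes, values, x):
    return sum(y * lagrange_basis(nodes, j, x)
               for j, y in enumerate(values))

nodes = [0.0, 1.0, 2.0, 3.0]
values = [1.0, 2.0, 5.0, 10.0]               # samples of y = x^2 + 1
print(lagrange_interp(nodes, values, 1.5))   # → 3.25 (exact for a quadratic)
```

Plotting the individual terms y_j * ell_j(x) makes visible how each data point's influence spreads across the interval, which is the graphical viewpoint the article develops.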
Code of Federal Regulations, 2010 CFR
2010-01-01
... load would significantly change the distribution of external or internal loads, this redistribution...) and ultimate loads (limit loads multiplied by prescribed factors of safety). Unless otherwise provided...
Optimized Hypernetted-Chain Solutions for Helium-4 Surfaces and Metal Surfaces
NASA Astrophysics Data System (ADS)
Qian, Guo-Xin
This thesis is a study of inhomogeneous Bose systems such as liquid 4He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood-Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two-body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder and ring diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis. Euler-Lagrange equations are derived for the two-body correlations which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression for the correlation energy in which the state-averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function for both 4He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments.
Dynamical evolution of a fictitious population of binary Neptune Trojans
NASA Astrophysics Data System (ADS)
Brunini, Adrián
2018-03-01
We present numerical simulations of the evolution of a synthetic population of binary Neptune Trojans, under the influence of solar perturbations and tidal friction (the so-called Kozai cycles and tidal friction evolution). Our model includes the dynamical influence of the four giant planets on the heliocentric orbit of the binary centre of mass. In this paper, we explore the evolution of initially tight binaries around the Neptune L4 Lagrange point. We found that the variation of the heliocentric orbital elements due to the libration around the Lagrange point introduces significant changes in the orbital evolution of the binaries. Collisional processes would not play a significant role in the dynamical evolution of Neptune Trojans. After 4.5 × 10^9 yr of evolution, ~50 per cent of the synthetic systems end up separated as single objects, most of them with slow diurnal rotation rates. The final orbital distribution of the surviving binary systems is statistically similar to the one found for Kuiper Belt binaries when collisional evolution is not included in the model. Systems composed of a primary and a small satellite are more fragile than those composed of components of similar sizes.
An Euler-Lagrange method considering bubble radial dynamics for modeling sonochemical reactors.
Jamshidi, Rashid; Brenner, Gunther
2014-01-01
Unsteady numerical computations are performed to investigate the flow field, wave propagation and the structure of bubbles in sonochemical reactors. The turbulent flow field is simulated using a two-equation Reynolds-Averaged Navier-Stokes (RANS) model. The distribution of the acoustic pressure is solved based on the Helmholtz equation using a finite volume method (FVM). The radial dynamics of a single bubble are described by the Keller-Miksis equation, which accounts for the compressibility of the liquid to first order in the acoustic Mach number. To investigate the structure of bubbles, a one-way coupled Euler-Lagrange approach is used to simulate the bulk medium and the bubbles as the dispersed phase. Drag, gravity, buoyancy, added mass, volume change and first Bjerknes forces are considered and their orders of magnitude are compared. To verify the implemented numerical algorithms, results for one- and two-dimensional simplified test cases are compared with analytical solutions. The results show good agreement with experimental results for the relationship between the acoustic pressure amplitude and the volume fraction of the bubbles. The two-dimensional axisymmetric results are in good agreement with the experimentally observed structure of bubbles close to the sonotrode. Copyright © 2013 Elsevier B.V. All rights reserved.
Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.
2005-01-01
The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.
Practical global oceanic state estimation
NASA Astrophysics Data System (ADS)
Wunsch, Carl; Heimbach, Patrick
2007-06-01
The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers is used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for the CLD to improve system performance. Numerical simulations for average PER and SE are shown to be consistent with the theoretical analysis, and the developed CLD with average PER constraint meets the target PER requirement and shows better performance than the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method can clearly increase the system SE and greatly reduce the impact of feedback delay.
Three-dimensional flat shell-to-shell coupling: numerical challenges
NASA Astrophysics Data System (ADS)
Guo, Kuo; Haikal, Ghadir
2017-11-01
The node-to-surface formulation is widely used in contact simulations with finite elements because it is relatively easy to implement using different types of element discretizations. This approach, however, has a number of well-known drawbacks, including locking due to over-constraint when this formulation is used as a two-pass method. Most studies on the node-to-surface contact formulation, however, have been conducted using solid elements, and little has been done to investigate the effectiveness of this approach for beam or shell elements. In this paper we show that locking can also be observed with the node-to-surface contact formulation when applied to plate and flat shell elements, even with a single-pass implementation with distinct master/slave designations, which is the standard solution to locking with solid elements. In our study, we use the quadrilateral four-node flat shell element for thin (Kirchhoff-Love) plate and thick (Reissner-Mindlin) plate theory, both in their standard forms and with improved formulations such as the linked interpolation [1] and the Discrete Kirchhoff [2] elements for thick and thin plates, respectively. The Lagrange multiplier method is used to enforce the node-to-surface constraints for all elements. The results show clear locking when compared to those obtained using a conforming mesh configuration.
Feedback stabilization of an oscillating vertical cylinder by POD Reduced-Order Model
NASA Astrophysics Data System (ADS)
Tissot, Gilles; Cordier, Laurent; Noack, Bernd R.
2015-01-01
The objective is to demonstrate the use of reduced-order models (ROM) based on proper orthogonal decomposition (POD) to stabilize the flow over a vertically oscillating circular cylinder in the laminar regime (Reynolds number equal to 60). The 2D Navier-Stokes equations are first solved with a finite element method, in which the moving cylinder is introduced via an ALE method. Since in fluid-structure interaction, the POD algorithm cannot be applied directly, we implemented the fictitious domain method of Glowinski et al. [1] where the solid domain is treated as a fluid undergoing an additional constraint. The POD-ROM is classically obtained by projecting the Navier-Stokes equations onto the first POD modes. At this level, the cylinder displacement is enforced in the POD-ROM through the introduction of Lagrange multipliers. For determining the optimal vertical velocity of the cylinder, a linear quadratic regulator framework is employed. After linearization of the POD-ROM around the steady flow state, the optimal linear feedback gain is obtained as solution of a generalized algebraic Riccati equation. Finally, when the optimal feedback control is applied, it is shown that the flow converges rapidly to the steady state. In addition, a vanishing control is obtained proving the efficiency of the control approach.
Lagrangian approach to the Barrett-Crane spin foam model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonzom, Valentin; Laboratoire de Physique, ENS Lyon, CNRS UMR 5672, 46 Allee d'Italie, 69007 Lyon; Livine, Etera R.
2009-03-15
We provide the Barrett-Crane spin foam model for quantum gravity with a discrete action principle, consisting of the usual BF term with discretized simplicity constraints which in the continuum turn topological BF theory into gravity. The setting is the same as usually considered in the literature: space-time is cut into 4-simplices, the connection describes how to glue these 4-simplices together and the action is a sum of terms depending on the holonomies around each triangle. We impose the discretized simplicity constraints on disjoint tetrahedra and we show how the Lagrange multipliers distort the parallel transport and the correlations between neighboring simplices. We then construct the discretized BF action using a noncommutative star product between SU(2) plane waves. We show how this naturally leads to the Barrett-Crane model. This clears up the geometrical meaning of the model. We discuss the natural generalization of this action principle and the spin foam models it leads to. We show how the recently introduced spin foam fusion coefficients emerge with a nontrivial measure. In particular, we recover the Engle-Pereira-Rovelli spin foam model by weakening the discretized simplicity constraints. Finally, we identify the two sectors of Plebanski's theory and we give the analog of the Barrett-Crane model in the nongeometric sector.
Modeling and optimization of dough recipe for breadsticks
NASA Astrophysics Data System (ADS)
Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.
2018-05-01
In this work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks, applying mathematical methods of experiment planning. The main factors chosen were the dosages of flaxseed flour and grape seed oil. The output parameters were the swelling factor of the products and their strength. Optimization of the formulation composition of the dough for breadsticks was carried out by experimental-statistical methods. As a result of the experiment, mathematical models were constructed in the form of regression equations adequately describing the process under study. The statistical processing of the experimental data was carried out using the Student, Cochran and Fisher criteria (with a confidence probability of 0.95). A mathematical interpretation of the regression equations was given. Optimization of the dough formulation for breadsticks was carried out by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: a flaxseed flour dosage of 14.22% and a grape seed oil dosage of 7.8%, ensuring products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a production method for the breadsticks "Idea" were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).
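The method of undetermined Lagrange multipliers used above can be illustrated symbolically. The response surface `f` and constraint `g` below are toy stand-ins, not the regression models fitted in the paper:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Toy response to maximize and a resource constraint g = 0 (illustrative only).
f = x * y            # e.g., a combined quality index of two dosages
g = x + y - 10       # total dosage fixed at 10

# Stationary points of the Lagrangian L = f - lam * g recover the constrained optimum.
L = f - lam * g
eqs = [sp.diff(L, v) for v in (x, y, lam)]
stationary = sp.solve(eqs, (x, y, lam), dict=True)
# stationary point: x = y = 5, with multiplier lam = 5
```

Differentiating with respect to the multiplier simply re-imposes the constraint, while the remaining equations balance the marginal responses of the two factors.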
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, the TNNR alternating direction method of multipliers, and the TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
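A minimal baseline for the matrix completion problem described above is the hard-impute iteration: alternate a rank-r SVD projection with resetting the observed entries. This is a sketch of the generic problem setting, not the TNNR-WRE algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-2 matrix and a random observation mask (illustrative sizes).
m, n, r = 30, 30, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.7          # ~70% of entries observed

# Hard-impute: project onto rank-r matrices, then restore the known entries.
X = np.where(mask, M, 0.0)
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation
    X[mask] = M[mask]                    # keep observed entries fixed

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With enough observed entries relative to the degrees of freedom of a rank-r matrix, the missing entries are filled in accurately; methods such as TNNR replace the hard rank projection with a relaxed singular-value penalty.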
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background and by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in the trend of UWB Radar-based moving target detection. Robust PCA models are criticized for being batch-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated, without changing its size, as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods demonstrate the robustness and efficiency of RPCA over classic PCA and the commonly used exponential averaging method. PMID:27598159
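The Inexact ALM model mentioned above can be sketched compactly. The parameter choices below (λ = 1/√max(m,n), geometric growth of μ) follow common defaults for IALM RPCA; the data are synthetic, not radar frames:

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(M, max_iter=500, tol=1e-7):
    """Minimal IALM sketch for RPCA: split M = L + S, L low-rank, S sparse."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    Y = np.zeros_like(M)          # Lagrange multiplier estimate
    S = np.zeros_like(M)
    mu = 1.25 / np.linalg.norm(M, 2)
    rho = 1.5
    for _ in range(max_iter):
        # L-update: singular value thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # S-update: entrywise soft-thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        Z = M - L - S
        Y = Y + mu * Z            # dual ascent on the multiplier
        mu = rho * mu
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S

# Synthetic demo: low-rank "background" plus sparse "foreground".
rng = np.random.default_rng(1)
L0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
S0 = np.zeros((30, 30))
spots = rng.random((30, 30)) < 0.05
S0[spots] = 10.0 * rng.standard_normal(spots.sum())
L_hat, S_hat = rpca_ialm(L0 + S0)
```

In the radar setting of the paper, the recovered low-rank part corresponds to the static background and the sparse part to the moving target's reflections.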
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
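The "least-squares matrix techniques on a Lagrange multiplier formulation with linear equality constraints" step above amounts to solving a KKT saddle-point system. The matrices below are a toy analogue, not the aerodynamic data of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# min ||A x - b||^2  subject to  C x = d, via the KKT (Lagrange multiplier) system.
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
C = np.array([[1.0, 1.0, 0.0, 0.0]])   # one linear equality constraint
d = np.array([1.0])

p = A.shape[1]
KKT = np.block([[A.T @ A, C.T],
                [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:p], sol[p:]   # constrained linear parameters and multiplier
```

Stationarity of the Lagrangian gives AᵀAx − Aᵀb + Cᵀλ = 0 together with Cx = d, which is exactly the block system solved here.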
NASA Astrophysics Data System (ADS)
Utama, Briandhika; Purqon, Acep
2016-08-01
The path integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition with a transition probability function, known as the propagator. In its early development, studies focused on applying this method to problems in quantum mechanics. Nevertheless, the path integral can also be applied to other subjects with some modifications of the propagator function. In this study, we investigate the application of the path integral method to financial derivatives, specifically stock options. The Black-Scholes model (Nobel Prize, 1997) was the starting anchor of option pricing studies. Though this model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes model is still a legitimate equation for pricing an option. Its derivation is demanding because it is a stochastic partial differential equation. The Black-Scholes equation follows a principle similar to the path integral, in that the share's initial price is transformed to its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we study the correlation between the path integral analytical solution and the Monte Carlo numerical solution to find the similarity between these two methods.
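The comparison of analytical and Monte Carlo option prices described above can be reproduced for a European call. This is the standard Black-Scholes setup, with illustrative parameter values:

```python
import numpy as np
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes European call price (closed form)."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo price: simulate terminal prices under the risk-neutral
    measure and discount the average payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
    return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

analytic = bs_call(100, 100, 0.05, 0.2, 1.0)   # at-the-money call, ~10.45
estimate = mc_call(100, 100, 0.05, 0.2, 1.0)
```

The Monte Carlo estimate converges to the analytic (propagator-based) price at the usual 1/√N rate, which is the agreement between the two solution routes that the study examines.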
Deterministic methods for multi-control fuel loading optimization
NASA Astrophysics Data System (ADS)
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we obtain significantly better denoising results, especially in oscillatory regions.
Optimal allocation of resources for suppressing epidemic spreading on networks
NASA Astrophysics Data System (ADS)
Chen, Hanshuang; Li, Guofeng; Zhang, Haifeng; Hou, Zhonghuai
2017-07-01
Efficient allocation of limited medical resources is crucial for controlling epidemic spreading on networks. Based on the susceptible-infected-susceptible model, we solve the optimization problem of how best to allocate the limited resources so as to minimize prevalence, provided that the curing rate of each node is positively correlated to its medical resource. By quenched mean-field theory and heterogeneous mean-field (HMF) theory, we prove that an epidemic outbreak will be suppressed to the greatest extent if the curing rate of each node is directly proportional to its degree, under which the effective infection rate λ has a maximal threshold λ_c^opt = 1/
Quantum control of coherent π-electron ring currents in polycyclic aromatic hydrocarbons
NASA Astrophysics Data System (ADS)
Mineo, Hirobumi; Fujimura, Yuichi
2017-12-01
We present results for quantum optimal control (QOC) of the coherent π-electron ring currents in polycyclic aromatic hydrocarbons (PAHs). Since PAHs consist of a number of condensed benzene rings, in principle, there exist various coherent ring patterns. These include the ring current localized to a designated benzene ring, the perimeter ring current that flows along the edge of the PAH, and the middle ring current of PAHs having an odd number of benzene rings such as anthracene. In the present QOC treatment, the best target wavefunction for generation of the ring current through a designated path is determined by a Lagrange multiplier method. The target function is integrated into the ordinary QOC theory. To demonstrate the applicability of the QOC procedure, we took naphthalene and anthracene as the simplest examples of linear PAHs. The mechanisms of ring current generation were clarified by analyzing the temporal evolutions of the electronic excited states after coherent excitation by UV pulses or (UV+IR) pulses as well as those of electric fields of the optimal laser pulses. Time-dependent simulations of the perimeter ring current and middle ring current of anthracene, which are induced by analytical electric fields of UV pulsed lasers, were performed to reproduce the QOC results.
NASA Astrophysics Data System (ADS)
Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen
2017-12-01
Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via the variable splitting and the augmented Lagrangian (LORSAL) is adopted in the proposed framework for reducing the computational time. Experiments are conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, which have shown the fast and robust performance of the proposed ESMLR framework.
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previous existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
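For reference, the mixed hyperbolic/parabolic GLM correction of Dedner et al. couples the divergence of B to an auxiliary scalar field ψ. This is a sketch of the commonly quoted form, with notation assumed (c_h a cleaning wave speed, c_p a damping parameter):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  + \nabla\cdot\left(\mathbf{v}\,\mathbf{B} - \mathbf{B}\,\mathbf{v}\right)
  + \nabla\psi = 0,
\qquad
\frac{\partial \psi}{\partial t} + c_h^2\,\nabla\cdot\mathbf{B}
  = -\,\frac{c_h^2}{c_p^2}\,\psi .
```

Combining the two equations shows that ∇·B errors obey a damped (telegraph-type) wave equation: they are transported away at speed c_h and dissipated on a timescale c_p²/c_h², which is why no elliptic cleaning step is needed.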
Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.
2014-01-01
The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672
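The static condensation argument above can be summarized in one formula. This is a sketch of the standard Perturbed Lagrangian functional, with assumed notation (W the strain energy, J = det F the volume ratio, κ a bulk penalty parameter):

```latex
\Pi(\mathbf{u}, p)
  = \int_{\Omega} \left[\, W(\mathbf{u}) + p\,(J - 1) - \frac{p^2}{2\kappa} \,\right] \mathrm{d}V .
```

Stationarity with respect to the pressure-like variable gives p = κ(J − 1); substituting this back (static condensation) yields the penalty functional Π(u) = ∫ [W + (κ/2)(J − 1)²] dV, while the limit κ → ∞ removes the quadratic perturbation and recovers the Lagrange multiplier enforcement of J = 1, which is the equivalence the paper exploits.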
A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves
NASA Astrophysics Data System (ADS)
Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.
2017-12-01
This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that is implemented using Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is accurate and more efficient than an alternative inversion method. The algorithm's performance is demonstrated by numerical examples including the case of noisy measurement responses and the case of a reduced number of sources and receivers.
Spatial modeling on the upperstream of the Citarum watershed: An application of geoinformatics
NASA Astrophysics Data System (ADS)
Ningrum, Windy Setia; Widyaningsih, Yekti; Indra, Tito Latif
2017-03-01
The Citarum watershed is the longest and the largest watershed in West Java, Indonesia, located at 106°51'36''-107°51' E and 7°19'-6°24' S across 10 districts, and serves as the water supply for over 15 million people. In this area, the water criticality index is of concern for reaching the balance between water supply and water demand, so that in the dry season the watershed is still able to meet the water needs of the society along the Citarum river. The objective of this research is to evaluate the water criticality index of the Citarum watershed area using spatial models to account for the spatial dependencies in the data. The Lagrange multiplier diagnostics for spatial dependence give LM-err = 34.6 (p-value = 4.1e-09) and LM-lag = 8.05 (p-value = 0.005), so modeling with the Spatial Lag Model (SLM) and the Spatial Error Model (SEM) was conducted. The likelihood ratio tests show that both the SLM and SEM models are better than the OLS model for modeling the water criticality index in the Citarum watershed. The AIC values of the SLM and SEM models are 78.9 and 51.4, respectively, so the SEM model is better than the SLM model at predicting the water criticality index in the Citarum watershed.
Wang, Ke; Yu, Yang-Xin; Gao, Guang-Hua
2008-05-14
A density functional theory (DFT) in the framework of cell model is proposed to calculate the structural and thermodynamic properties of aqueous DNA-electrolyte solution with finite DNA concentrations. The hard-sphere contribution to the excess Helmholtz energy functional is derived from the modified fundamental measure theory, and the electrostatic interaction is evaluated through a quadratic functional Taylor expansion around a uniform fluid. The electroneutrality in the cell leads to a variational equation with a constraint. Since the reference fluid is selected to be a bulk phase, the Lagrange multiplier proves to be the potential drop across the cell boundary (Donnan potential). The ion profiles and electrostatic potential profiles in the cell are calculated from the present DFT-cell model. Our DFT-cell model gives better prediction of ion profiles than the Poisson-Boltzmann (PB)- or modified PB-cell models when compared to the molecular simulation data. The effects of polyelectrolyte concentration, ion size, and added-salt concentration on the electrostatic potential difference between the DNA surface and the cell boundary are investigated. The expression of osmotic coefficient is derived from the general formula of grand potential. The osmotic coefficients predicted by the DFT are lower than the PB results and are closer to the simulation results and experimental data.
Model-based control strategies for systems with constraints of the program type
NASA Astrophysics Data System (ADS)
Jarzębowska, Elżbieta
2006-08-01
The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material, the latter referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and may be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method makes it possible to combine material and program constraints and merge them both into the motion equations. Lagrange's equations with multipliers are a particular case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that the strategy can be extended to hybrid program motion/force tracking.
Reduction technique for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1995-01-01
A reduction technique and a computational procedure are presented for predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of the reduction technique, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface.
NASA Astrophysics Data System (ADS)
Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.
2016-12-01
The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty and water scarcity requires a strategic combination of supply sources for reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integration of long- and short-term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long-term) and operation (short-term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal about the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion.
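The multiplier-as-price-signal idea above can be shown on a toy short-term allocation: meet a fixed demand from sources with quadratic costs, where the KKT conditions equalize marginal costs at the optimum and the common value is the shadow price passed to the long-term model. All numbers are illustrative:

```python
import numpy as np

# Toy allocation: minimize sum_i 0.5 * a_i * q_i^2  subject to  sum_i q_i = D.
a = np.array([2.0, 4.0, 8.0])    # cost curvature of each supply source
D = 10.0                         # total demand to be met

# KKT stationarity: a_i * q_i = lam for every source; feasibility: sum q_i = D.
# Solving the two together gives lam = D / sum(1/a_i) in closed form.
lam = D / np.sum(1.0 / a)        # Lagrange multiplier = marginal cost of water
q = lam / a                      # optimal withdrawal from each source
```

Cheaper (flatter-cost) sources carry more of the demand, and `lam` is exactly the marginal opportunity cost signal that would tell the long-term model whether expanding capacity is worthwhile.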
Code of Federal Regulations, 2010 CFR
2010-01-01
... significantly change the distribution of external or internal loads, this redistribution must be taken into... loads multiplied by prescribed factors of safety). Unless otherwise provided, prescribed loads are limit...
NASA Astrophysics Data System (ADS)
Sharma, Pramod; Das, Soumitra; Vatsa, Rajesh K.
2017-07-01
Systematic manipulation of the ionic outcome in the laser-cluster interaction process has been realized in studies carried out on tetramethyltin (TMT) clusters under picosecond laser conditions, determined by the choice of laser wavelength and intensity. As a function of laser intensity, TMT clusters exhibit a gradual enhancement in overall ionization of their cluster constituents, up to a saturation level of ionization that is distinct for different wavelengths (266, 355, and 532 nm). Simultaneously, the systematic appearance of more highly multiply charged atomic ions and a shift in the relative abundance of multiply charged atomic ions towards higher charge states were observed using a time-of-flight mass spectrometer. At the saturation level, multiply charged atomic ions up to (C2+, Sn2+) at 266 nm, (C4+, Sn4+) at 355 nm, and (C4+, Sn6+) at 532 nm were detected. In addition, at 355 nm intra-cluster ion chemistry within the ionized cluster leads to the generation of the molecular hydrogen ion (H2+) and the triatomic molecular hydrogen ion (H3+). The generation of multiply charged atomic ions is ascribed to efficient coupling of the laser pulse with the cluster medium, facilitated by inner-ionized electrons produced within the cluster at the leading edge of the laser pulse. The role of inner-ionized electrons is corroborated by measuring the kinetic energy distribution of electrons liberated upon disintegration of the excessively ionized cluster under the influence of the picosecond laser pulse.
Comparison of Numerical Modeling Methods for Soil Vibration Cutting
NASA Astrophysics Data System (ADS)
Jiang, Jiandong; Zhang, Enguang
2018-01-01
In this paper, we study appropriate numerical simulation methods for vibration soil cutting. Three numerical simulation methods commonly used for constant-speed soil cutting, Lagrange, ALE and DEM, are analyzed. Three vibration soil cutting simulation models are established using LS-DYNA. The applicability of the three methods to this problem is analyzed in combination with the model mechanism and simulation results. Both the Lagrange method and the DEM method can capture the force oscillation of the tool and the large deformation of the soil in vibration cutting, with the Lagrange method showing the breaking of soil debris more clearly. Because of its poor stability, the ALE method is not suitable for the soil vibration cutting problem.
Price schedules coordination for electricity pool markets
NASA Astrophysics Data System (ADS)
Legbedji, Alexis Motto
2002-04-01
We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques have been proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed; consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques remain relevant, among others, distributed systems (the Internet perhaps being the most conspicuous example) and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, such systems of agents can be optimized by constructing a large-scale mathematical program and solving it centrally using currently available computing power. In practice, however, because agents are self-interested and unwilling to reveal sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of the agents' objective functions subject to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a duality gap is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedules coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program.
In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.
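The price-decomposition idea described above can be sketched numerically. In this illustrative example (the utility functions, coefficients, and capacity are invented for illustration, not taken from the thesis), two agents with concave utilities share one capacity constraint, and a coordinator adjusts a single price (the Lagrange multiplier) by subgradient ascent on the dual until aggregate demand matches capacity:

```python
def agent_best_response(a, price):
    # Each agent privately maximizes a*ln(1+x) - price*x over x >= 0;
    # the first-order condition a/(1+x) = price gives x = a/price - 1.
    return max(0.0, a / price - 1.0)

def coordinate(a_coeffs, capacity, steps=5000, lr=0.01):
    """Subgradient ascent on the dual: the price (Lagrange multiplier)
    rises when aggregate demand exceeds capacity and falls otherwise.
    The coordinator never sees the agents' objectives, only their demands."""
    price = 1.0
    for _ in range(steps):
        demand = sum(agent_best_response(a, price) for a in a_coeffs)
        price = max(1e-9, price + lr * (demand - capacity))
    return price, [agent_best_response(a, price) for a in a_coeffs]

price, allocation = coordinate([2.0, 4.0], capacity=3.0)
print(price, sum(allocation))  # price converges to 1.2; total demand matches capacity
```

For this separable problem the dual has no gap, so the coordinating price exists and the iteration converges; with a nonconvex or non-separable primal, the same scheme can fail, which is precisely the situation the thesis addresses.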
Particle Clogging in Filter Media of Embankment Dams: A Numerical and Experimental Study
NASA Astrophysics Data System (ADS)
Antoun, T.; Kanarska, Y.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.; Smith, J.; Hall, R. L.; Woodson, S. C.
2013-12-01
The safety of dam structures requires characterizing the ability of granular filters to capture fine soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam's safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique which enforces the correct in-domain computational boundary conditions inside and on the boundary of the particles. The numerical code is validated against experiments conducted at the U.S. Army Engineer Research and Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as the particle size ratio, the magnitude of the hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and increasing concentration of suspended particles. This work was performed under the auspices of the U.S.
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security (DHS), Science and Technology Directorate, Homeland Security Advanced Research Projects Agency (HSARPA).
Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport
NASA Astrophysics Data System (ADS)
Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.
2016-12-01
Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. 
Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Gurvinderjit; Singh, Bhajan, E-mail: bhajan2k1@yahoo.co.in; Sandhu, B. S.
2015-08-28
The present measurements were carried out to investigate the multiple scattering of 662 keV gamma photons emerging from targets of binary alloys (brass and soldering material). The scattered photons are detected by a 51 mm × 51 mm NaI(Tl) scintillation detector, whose response unscrambling, converting the observed pulse-height distribution to a true photon energy spectrum, is obtained with the help of a 10 × 10 inverse response matrix. The number of multiply scattered events having the same energy as the singly scattered distribution first increases with target thickness and then saturates. Applying the response function of the scintillation detector does not result in any change of the measured saturation thickness. Monte Carlo calculations support the present experimental results.
NASA Astrophysics Data System (ADS)
Iwayama, H.; Sugishima, A.; Nagaya, K.; Yao, M.; Fukuzawa, H.; Motomura, K.; Liu, X.-J.; Yamada, A.; Wang, C.; Ueda, K.; Saito, N.; Nagasono, M.; Tono, K.; Yabashi, M.; Ishikawa, T.; Ohashi, H.; Kimura, H.; Togashi, T.
2010-08-01
The emission of highly charged ions from Xe clusters exposed to intense extreme ultraviolet laser pulses (λ ~ 52 nm) from the free electron laser in Japan was investigated using ion momentum spectroscopy. With increasing average cluster size, we observed multiply charged ions Xe^z+ up to z = 3. From the kinetic energy distributions, we found that the multiply charged ions were generated near the cluster surface. Our results suggest that charges are inhomogeneously redistributed in the cluster to lower the total energy stored in the clusters.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1984-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons which strike the photocathode of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were generated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1986-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMT) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons which strike the photocathodes of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable to both galactic cosmic rays and charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different phototube parameters. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were generated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
Massengill, L W; Mundie, D B
1992-01-01
A neural network IC based on dynamic charge injection is described. The hardware design is space and power efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed which incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.
Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering
NASA Astrophysics Data System (ADS)
Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping
2018-07-01
Seepage-induced failure is the most common failure mode in dike engineering. Given the characteristics of seepage in dikes, such as longitudinal extension, randomness, strong concealment and small initial magnitude, a distributed fiber temperature sensor system (DTS) with an improved optical fiber layout scheme is used to locate the initial interpolation point of the saturation line. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltrated surface of the full dike section is generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied to an engineering case, which demonstrates a real-time, full-section seepage monitoring technique for dikes.
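The barycentric Lagrange interpolation underlying BLICM can be sketched in a few lines. This is a generic implementation of the second (true) barycentric form, not the paper's collocation method; the nodes and the sampled function are arbitrary choices for illustration:

```python
def barycentric_weights(xs):
    # w_j = 1 / prod_{k != j} (x_j - x_k)
    ws = []
    for j, xj in enumerate(xs):
        w = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                w /= (xj - xk)
        ws.append(w)
    return ws

def barycentric_interpolate(xs, ys, ws, x):
    # Second (true) barycentric form; exact at the nodes and
    # numerically stable compared with the naive Lagrange formula.
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, ws):
        if x == xj:
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 for x in xs]          # sample a quadratic at the nodes
ws = barycentric_weights(xs)
val = barycentric_interpolate(xs, ys, ws, 1.5)
print(val)  # 2.25 up to rounding: the degree-3 interpolant reproduces the quadratic
```

Once the weights are precomputed, each evaluation costs O(n), which is what makes the barycentric form attractive for repeated evaluation along a monitoring section.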
General invertible transformation and physical degrees of freedom
NASA Astrophysics Data System (ADS)
Takahashi, Kazufumi; Motohashi, Hayato; Suyama, Teruaki; Kobayashi, Tsutomu
2017-04-01
An invertible field transformation is such that the old field variables correspond one-to-one to the new variables. As such, one may think that two systems that are related by an invertible transformation are physically equivalent. However, if the transformation depends on field derivatives, the equivalence between the two systems is nontrivial due to the appearance of higher derivative terms in the equations of motion. To address this problem, we prove the following theorem on the relation between an invertible transformation and Euler-Lagrange equations: If the field transformation is invertible, then any solution of the original set of Euler-Lagrange equations is mapped to a solution of the new set of Euler-Lagrange equations, and vice versa. We also present applications of the theorem to scalar-tensor theories.
Particle Swarm Optimization of Low-Thrust, Geocentric-to-Halo-Orbit Transfers
NASA Astrophysics Data System (ADS)
Abraham, Andrew J.
Missions to Lagrange points are becoming increasingly popular amongst spacecraft mission planners. Lagrange points are locations in space where the gravity force from two bodies, and the centrifugal force acting on a third body, cancel. To date, all spacecraft that have visited a Lagrange point have done so using high-thrust, chemical propulsion. Due to the increasing availability of low-thrust (high efficiency) propulsive devices, and their increasing capability in terms of fuel efficiency and instantaneous thrust, it has now become possible for a spacecraft to reach a Lagrange point orbit without the aid of chemical propellant. While at any given time there are many paths for a low-thrust trajectory to take, only one is optimal. The traditional approach to spacecraft trajectory optimization utilizes some form of gradient-based algorithm. While these algorithms offer numerous advantages, they also have a few significant shortcomings. The three most significant shortcomings are: (1) the fact that an initial guess solution is required to initialize the algorithm, (2) the radius of convergence can be quite small and can allow the algorithm to become trapped in local minima, and (3) gradient information is not always accessible nor always trustworthy for a given problem. To avoid these problems, this dissertation is focused on optimizing a low-thrust transfer trajectory from a geocentric orbit to an Earth-Moon, L1, Lagrange point orbit using the method of Particle Swarm Optimization (PSO). The PSO method is an evolutionary heuristic that was originally written to model birds swarming to locate hidden food sources. This PSO method will enable the exploration of the invariant stable manifold of the target Lagrange point orbit in an effort to optimize the spacecraft's low-thrust trajectory. Examples of these optimized trajectories are presented and contrasted with those found using traditional, gradient-based approaches. 
In summary, the results of this dissertation find that the PSO method does, indeed, successfully optimize the low-thrust transfer trajectory problem without the need for an initial guess. Furthermore, a two-degree-of-freedom PSO problem formulation significantly outperformed a one-degree-of-freedom formulation by at least an order of magnitude in terms of CPU time. Finally, the PSO method is also used to solve a traditional, two-burn, impulsive transfer to a Lagrange point orbit using a hybrid optimization algorithm that incorporates a gradient-based shooting algorithm as a pre-optimizer. Surprisingly, the results of this study show that "fast" transfers outperform "slow" transfers in terms of both Δv and time of flight.
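A minimal PSO of the kind described above can be written compactly. This sketch optimizes a toy cost function rather than a low-thrust trajectory; the inertia and acceleration coefficients (w, c1, c2) are common textbook defaults, not values from the dissertation:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Minimal particle swarm optimizer: gradient-free, needing no initial
    guess beyond random seeding. bounds is a list of (lo, hi) per dimension."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fval = f(pos[i])
            if fval < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], fval
                if fval < gbest_val:
                    gbest, gbest_val = pos[i][:], fval
    return gbest, gbest_val

random.seed(0)
# Toy stand-in for a trajectory cost: sphere function, optimum at the origin.
best, val = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

In a trajectory application, each particle position would encode the free parameters of the transfer (e.g. manifold insertion point and thrust-arc timing) and f would run the trajectory propagation and return the cost.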
Ipsen, Andreas
2017-02-03
Here, the mass peak centroid is a quantity that is at the core of mass spectrometry (MS). However, despite its central status in the field, models of its statistical distribution are often chosen quite arbitrarily and without attempts at establishing a proper theoretical justification for their use. Recent work has demonstrated that for mass spectrometers employing analog-to-digital converters (ADCs) and electron multipliers, the statistical distribution of the mass peak intensity can be described via a relatively simple model derived essentially from first principles. Building on this result, the following article derives the corresponding statistical distribution for the mass peak centroids of such instruments. It is found that for increasing signal strength, the centroid distribution converges to a Gaussian distribution whose mean and variance are determined by physically meaningful parameters and which in turn determine bias and variability of the m/z measurements of the instrument. Through the introduction of the concept of "pulse-peak correlation", the model also elucidates the complicated relationship between the shape of the voltage pulses produced by the preamplifier and the mean and variance of the centroid distribution. The predictions of the model are validated with empirical data and with Monte Carlo simulations.
Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.
2005-01-01
For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdős weights) and even weights which vanish strongly near ±1, for example Pollaczek-type weights.
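Derivatives of a Lagrange interpolant are conveniently evaluated through a differentiation matrix built from the barycentric weights. The following generic sketch (unrelated to the exponential-weight setting of the paper) differentiates exactly any polynomial up to the interpolation degree:

```python
import numpy as np

def diff_matrix(xs):
    """Differentiation matrix for the Lagrange interpolant on nodes xs:
    (D @ f)(x_i) is the exact derivative of the interpolant at x_i.
    Off-diagonal: D_ij = (w_j / w_i) / (x_i - x_j); rows sum to zero."""
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k).
    w = np.array([1.0 / np.prod([xj - xk for k, xk in enumerate(xs) if k != j])
                  for j, xj in enumerate(xs)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (xs[i] - xs[j])
        D[i, i] = -D[i].sum()   # derivative of a constant is zero
    return D

# Chebyshev-type nodes on [-1, 1]; exact for polynomials of degree < n.
xs = np.cos(np.linspace(0.0, np.pi, 8))
D = diff_matrix(xs)
f, df = xs ** 3, 3 * xs ** 2    # D @ f reproduces the derivative of x^3
```

The choice of nodes matters: clustered (Chebyshev-type) nodes keep the interpolant and its derivatives well behaved, whereas equispaced nodes degrade rapidly with degree, which is the kind of convergence question the paper studies for weighted settings.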
Space Instrument Optimization by Implementing of Generic Three Bodies Circular Restricted Problem
NASA Astrophysics Data System (ADS)
Nejat, Cyrus
2011-01-01
In this study, the main discussion emphasizes spacecraft operation, with a concentration on stationary points in space. To achieve these objectives, the circular restricted problem was solved for selected approaches. The equations of motion of the three-body restricted problem were shown to apply in cases beyond Lagrange's (1736-1813 A.D.) achievements, by means of the proposed CN (Cyrus Nejat) theorem along with appropriate comments. In addition to the five Lagrange points, two other points, CN1 and CN2, were found to be unstable equilibrium points at a very large distance with respect to the Lagrange points, but stable at infinity. A simulation of the Milky Way and Andromeda galaxies was created to find the Lagrange points, CN points (Cyrus Nejat points), and CN lines (Cyrus Nejat lines). The equations of motion were rearranged, by means of a decoupling concept, in such a way that the transfer trajectory would be conic. The main objective was to make a halo orbit transfer about the CN lines. The author therefore proposes that the corresponding sizing designs be developed by optimization techniques in future approaches, optimization techniques being suitable procedures to search for the most ideal response of a system.
NASA Technical Reports Server (NTRS)
Lamar, J. E.
1994-01-01
This program represents a subsonic aerodynamic method for determining the mean camber surface of trimmed noncoplanar planforms with minimum vortex drag. With this program, multiple surfaces can be designed together to yield a trimmed configuration with minimum induced drag at some specified lift coefficient. The method uses a vortex-lattice and overcomes previous difficulties with chord loading specification. A Trefftz plane analysis is used to determine the optimum span loading for minimum drag. The program then solves for the mean camber surface of the wing associated with this loading. Pitching-moment or root-bending-moment constraints can be employed at the design lift coefficient. Sensitivity studies of vortex-lattice arrangements have been made with this program and comparisons with other theories show generally good agreement. The program is very versatile and has been applied to isolated wings, wing-canard configurations, a tandem wing, and a wing-winglet configuration. The design problem solved with this code is essentially an optimization one. A subsonic vortex-lattice is used to determine the span load distribution(s) on bent lifting line(s) in the Trefftz plane. A Lagrange multiplier technique determines the required loading, which is used to calculate the mean camber slopes, which are then integrated to yield the local elevation surface. The problem of determining the necessary circulation matrix is simplified by having the chordwise shape of the bound circulation remain unchanged across each span, though the chordwise shape may vary from one planform to another. The circulation matrix is obtained by calculating the spanwise scaling of the chordwise shapes. A chordwise summation of the lift and pitching-moment is utilized in the Trefftz plane solution on the assumption that the trailing wake does not roll up and that the general configuration has specifiable chord loading shapes. 
VLMD is written in FORTRAN for IBM PC series and compatible computers running MS-DOS. This program requires 360K of RAM for execution. The Ryan McFarland FORTRAN compiler and PLINK86 are required to recompile the source code; however, a sample executable is provided on the diskette. The standard distribution medium for VLMD is a 5.25 inch 360K MS-DOS format diskette. VLMD was originally developed for use on CDC 6000 series computers in 1976. It was originally ported to the IBM PC in 1986, and, after minor modifications, the IBM PC port was released in 1993.
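The Lagrange multiplier step at the core of such a design method, minimizing a quadratic induced-drag form subject to a linear lift constraint, reduces to a single linear solve of the KKT system. The 2 x 2 matrix below is a made-up stand-in for the Trefftz-plane influence matrix, purely for illustration; it is not VLMD's actual data:

```python
import numpy as np

def optimal_loading(A, c, L_target):
    """Minimize g^T A g subject to c^T g = L_target by solving the
    KKT system  [2A  c; c^T 0] [g; lam] = [0; L_target],
    where lam is the Lagrange multiplier on the lift constraint."""
    n = len(c)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * A
    K[:n, n] = c
    K[n, :n] = c
    rhs = np.zeros(n + 1)
    rhs[n] = L_target
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n]          # loading vector and the multiplier

A = np.array([[2.0, 0.5],           # hypothetical induced-drag influence matrix
              [0.5, 1.0]])
c = np.array([1.0, 1.0])            # lift contribution of each span station
g, lam = optimal_loading(A, c, 1.0)
print(g)  # the drag-optimal span loading meeting the lift constraint
```

Additional linear constraints (pitching moment, root bending moment) simply add rows and columns to the same bordered system, one multiplier per constraint, which is how such design codes typically accommodate trim conditions.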
Shuang, Qing; Zhang, Mingyuan; Yuan, Yongbo
2014-01-01
As a means of supplying water, the water distribution system (WDS) is one of the most important complex infrastructures. Its stability and reliability are critical for urban activities. A WDS can be characterized as a network of multiple nodes (e.g. reservoirs and junctions) interconnected by physical links (e.g. pipes). Instead of analyzing the highest failure rate or highest betweenness, the reliability of the WDS is evaluated by introducing hydraulic analysis and cascading failures (a conductive failure pattern) from complex network theory. The crucial pipes are identified eventually. The proposed methodology is illustrated by an example. The results show that the demand multiplier has a great influence on the peak of reliability and the persistence time of the cascading failures as they propagate in the WDS. The time period when the system has the highest reliability is when the demand multiplier is less than 1. A threshold of the tolerance parameter exists: when the tolerance parameter is less than the threshold, the time period with the highest system reliability does not coincide with the minimum value of the demand multiplier. The results indicate that system reliability should be evaluated with the properties of the WDS and the characteristics of cascading failures, so as to improve its ability to resist disasters. PMID:24551102
2014-04-01
The CG and DG horizontal discretizations employ high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. Inside each element we build (N + 1) GLL quadrature points, where N indicates the polynomial order of the basis.
PID position regulation in one-degree-of-freedom Euler-Lagrange systems actuated by a PMSM
NASA Astrophysics Data System (ADS)
Verastegui-Galván, J.; Hernández-Guzmán, V. M.; Orrante-Sakanassi, J.
2018-02-01
This paper is concerned with position regulation in one-degree-of-freedom Euler-Lagrange systems. We consider that the mechanical subsystem is actuated by a permanent magnet synchronous motor (PMSM). Our proposal consists of a Proportional-Integral-Derivative (PID) controller for the mechanical subsystem and a slight variation of field oriented control for the PMSM. We take into account the motor electric dynamics during the stability analysis. We present, for the first time, a global asymptotic stability proof for such a control scheme without requiring the mechanical subsystem to naturally possess viscous friction. Finally, as a corollary of our main result, we prove global asymptotic stability for output feedback PID regulation of one-degree-of-freedom Euler-Lagrange systems when the generated torque is considered as the system input, i.e. when the electric dynamics of the PMSM are not taken into account.
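A stripped-down version of the control problem can be simulated directly. The sketch below regulates a one-degree-of-freedom pendulum (an Euler-Lagrange system) with a PID law, treating torque as the input; unlike the paper, the PMSM electric dynamics are ignored here, and all gains and plant parameters are invented for illustration:

```python
import math

def simulate_pid(kp, ki, kd, q_des, dt=1e-3, steps=20000):
    """PID position regulation of a one-DOF pendulum:
    J*q'' + m*g*l*sin(q) = tau, integrated with forward Euler.
    The derivative term acts on the measured velocity (q_des is constant)."""
    m, l, g, J = 1.0, 0.5, 9.81, 0.25   # hypothetical plant parameters
    q = dq = integ = 0.0
    for _ in range(steps):
        e = q_des - q
        integ += e * dt
        tau = kp * e + ki * integ - kd * dq
        ddq = (tau - m * g * l * math.sin(q)) / J
        dq += ddq * dt
        q += dq * dt
    return q

q_final = simulate_pid(kp=40.0, ki=20.0, kd=8.0, q_des=1.0)
```

The integral term is what removes the steady-state error caused by the gravity torque m*g*l*sin(q_des) at the target position; without it, a plain PD law would settle with a constant offset.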
NASA Astrophysics Data System (ADS)
Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu
2018-05-01
A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model's validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP, to determine whether MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution, and empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, by MCM with Gaussian distributed data, or by MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed, and the MCM method is more accurate than adding or subtracting percentiles.
A technique for estimating the absolute gain of a photomultiplier tube
NASA Astrophysics Data System (ADS)
Takahashi, M.; Inome, Y.; Yoshii, S.; Bamba, A.; Gunji, S.; Hadasch, D.; Hayashida, M.; Katagiri, H.; Konno, Y.; Kubo, H.; Kushida, J.; Nakajima, D.; Nakamori, T.; Nagayoshi, T.; Nishijima, K.; Nozaki, S.; Mazin, D.; Mashuda, S.; Mirzoyan, R.; Ohoka, H.; Orito, R.; Saito, T.; Sakurai, S.; Takeda, J.; Teshima, M.; Terada, Y.; Tokanai, F.; Yamamoto, T.; Yoshida, T.
2018-06-01
Detection of low-intensity light relies on the conversion of photons to photoelectrons, which are then multiplied and detected as an electrical signal. To measure the actual intensity of the light, one must know the factor by which the photoelectrons have been multiplied. To obtain this amplification factor, we have developed a procedure for estimating precisely the signal caused by a single photoelectron. The method utilizes the fact that the photoelectrons conform to a Poisson distribution. The average signal produced by a single photoelectron can then be estimated from the number of noise events, without requiring analysis of the distribution of the signal produced by a single photoelectron. The signal produced by one or more photoelectrons can be estimated experimentally without any assumptions. This technique, and an example of the analysis of a signal from a photomultiplier tube, are described in this study.
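The Poisson-based estimate described above can be reproduced in a short simulation. Since the zero-photoelectron ("noise") fraction satisfies N0/N = exp(-mu), the mean photoelectron number is mu = -ln(N0/N), and the mean single-photoelectron signal follows as the mean charge divided by mu. All numbers below (mu = 0.5, single-photoelectron signal 10 with 20% spread) are invented for this toy check, not taken from the paper:

```python
import math
import random

def sample_poisson(mu):
    # Knuth's algorithm: count uniforms whose running product stays above e^-mu.
    limit, k, p = math.exp(-mu), 0, random.random()
    while p > limit:
        k += 1
        p *= random.random()
    return k

def estimate_spe_signal(charges, pedestal_threshold):
    """Estimate the mean single-photoelectron signal from low-intensity
    triggers using only Poisson statistics: mu = -ln(N0/N) from the
    zero-photoelectron events, then mean(charge) / mu."""
    n0 = sum(1 for q in charges if q < pedestal_threshold)
    mu = -math.log(n0 / len(charges))
    return sum(charges) / len(charges) / mu

random.seed(1)
true_mu, spe_true = 0.5, 10.0
charges = []
for _ in range(200000):
    n_pe = sample_poisson(true_mu)
    charges.append(sum(random.gauss(spe_true, 2.0) for _ in range(n_pe)))
est = estimate_spe_signal(charges, pedestal_threshold=1.0)
print(est)  # close to the true single-photoelectron signal of 10
```

Note that no assumption is made about the shape of the single-photoelectron response; only its mean is recovered, which mirrors the assumption-free character of the method in the paper.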
Predicting Drug-Target Interactions With Multi-Information Fusion.
Peng, Lihong; Liao, Bo; Zhu, Wen; Li, Zejun; Li, Keqin
2017-03-01
Identifying potential associations between drugs and targets is a critical prerequisite for modern drug discovery and repurposing. However, predicting these associations is difficult because of the limitations of existing computational methods. Most models only consider chemical structures and protein sequences, and other models are oversimplified. Moreover, datasets used for analysis contain only true-positive interactions, and experimentally validated negative samples are unavailable. To overcome these limitations, we developed a semi-supervised based learning framework called NormMulInf through collaborative filtering theory by using labeled and unlabeled interaction information. The proposed method initially determines similarity measures, such as similarities among samples and local correlations among the labels of the samples, by integrating biological information. The similarity information is then integrated into a robust principal component analysis model, which is solved using augmented Lagrange multipliers. Experimental results on four classes of drug-target interaction networks suggest that the proposed approach can accurately classify and predict drug-target interactions. Part of the predicted interactions are reported in public databases. The proposed method can also predict possible targets for new drugs and can be used to determine whether atropine may interact with alpha1B- and beta1- adrenergic receptors. Furthermore, the developed technique identifies potential drugs for new targets and can be used to assess whether olanzapine and propiomazine may target 5HT2B. Finally, the proposed method can potentially address limitations on studies of multitarget drugs and multidrug targets.
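The augmented-Lagrange-multiplier step used to solve the robust principal component analysis model can be sketched generically. This is a standard alternating-directions RPCA solver (singular-value thresholding for the low-rank part, soft thresholding for the sparse part), not the NormMulInf implementation; the mu heuristic and the toy data are assumptions:

```python
import numpy as np

def rpca_alm(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Robust PCA via an augmented Lagrange multiplier method:
    minimize ||L||_* + lam*||S||_1 subject to M = L + S,
    with Y the matrix of multipliers for the equality constraint."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()  # common heuristic
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: entrywise soft thresholding.
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                 # constraint residual
        Y += mu * R                   # multiplier (dual) ascent step
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy check: a rank-1 matrix corrupted by a few large spikes.
rng = np.random.default_rng(0)
base = np.outer(rng.standard_normal(40), rng.standard_normal(40))
spikes = np.zeros((40, 40))
spikes[rng.integers(0, 40, 20), rng.integers(0, 40, 20)] = 5.0
L, S = rpca_alm(base + spikes)
```

In the drug-target setting, M would be the similarity-augmented interaction matrix, the recovered low-rank part capturing the global interaction structure and the sparse part absorbing outliers and noise.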
Statistical mechanics of shell models for two-dimensional turbulence
NASA Astrophysics Data System (ADS)
Aurell, E.; Boffetta, G.; Crisanti, A.; Frick, P.; Paladin, G.; Vulpiani, A.
1994-12-01
We study shell models that conserve the analogs of energy and enstrophy and hence are designed to mimic fluid turbulence in two dimensions (2D). The main result is that the observed state is well described as a formal statistical equilibrium, closely analogous to the approach to two-dimensional ideal hydrodynamics of Onsager [Nuovo Cimento Suppl. 6, 279 (1949)], Hopf [J. Rat. Mech. Anal. 1, 87 (1952)], and Lee [Q. Appl. Math. 10, 69 (1952)]. In the presence of forcing and dissipation we observe a forward flux of enstrophy and a backward flux of energy. These fluxes can be understood as mean diffusive drifts from a source to two sinks in a system which is close to local equilibrium with Lagrange multipliers (``shell temperatures'') changing slowly with scale. This is clear evidence that the simplest shell models are not adequate to reproduce the main features of two-dimensional turbulence. The dimensional predictions on the power spectra from a supposed forward cascade of enstrophy and from one branch of the formal statistical equilibrium coincide in these shell models in contrast to the corresponding predictions for the Navier-Stokes and Euler equations in 2D. This coincidence has previously led to the mistaken conclusion that shell models exhibit a forward cascade of enstrophy. We also study the dynamical properties of the models and the growth of perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkatesan, R.C., E-mail: ravi@systemsresearchcorp.com; Plastino, A., E-mail: plastino@fisica.unlp.edu.ar
The (i) reciprocity relations for the relative Fisher information (RFI, hereafter) and (ii) a generalized RFI–Euler theorem are self-consistently derived from the Hellmann–Feynman theorem. These new reciprocity relations generalize the RFI–Euler theorem and constitute the basis for building up a mathematical Legendre transform structure (LTS, hereafter), akin to that of thermodynamics, that underlies the RFI scenario. This demonstrates the possibility of translating the entire mathematical structure of thermodynamics into a RFI-based theoretical framework. Virial theorems play a prominent role in this endeavor, as a Schrödinger-like equation can be associated to the RFI. Lagrange multipliers are determined invoking the RFI–LTS link and the quantum mechanical virial theorem. An appropriate ansatz allows for the inference of probability density functions (pdf’s, hereafter) and energy-eigenvalues of the above mentioned Schrödinger-like equation. The energy-eigenvalues obtained here via inference are benchmarked against established theoretical and numerical results. A principled theoretical basis to reconstruct the RFI-framework from the FIM framework is established. Numerical examples for exemplary cases are provided. - Highlights: • Legendre transform structure for the RFI is obtained with the Hellmann–Feynman theorem. • Inference of the energy-eigenvalues of the SWE-like equation for the RFI is accomplished. • Basis for reconstruction of the RFI framework from the FIM-case is established. • Substantial qualitative and quantitative distinctions with prior studies are discussed.
A finite element-based algorithm for rubbing induced vibration prediction in rotors
NASA Astrophysics Data System (ADS)
Behzad, Mehdi; Alvandi, Mehdi; Mba, David; Jamali, Jalil
2013-10-01
In this paper, an algorithm is developed for more realistic investigation of rotor-to-stator rubbing vibration, based on finite element theory with unilateral contact and friction conditions. To model the rotor, cross sections are assumed to be radially rigid. A finite element discretization based on traditional beam theories which sufficiently accounts for axial and transversal flexibility of the rotor is used. A general finite element discretization model considering inertial and viscoelastic characteristics of the stator is used for modeling the stator. Therefore, for contact analysis, only the boundary of the stator is discretized. The contact problem is defined as the contact between the circular rigid cross section of the rotor and “nodes” of the stator only. Next, the gap function and contact conditions are described for the contact problem. Two finite element models of the rotor and the stator are coupled via the Lagrange multipliers method in order to obtain the constrained equation of motion. A case study of the partial rubbing is simulated using the algorithm. The synchronous and subsynchronous responses of the partial rubbing are obtained for different rotational speeds. In addition, a sensitivity analysis is carried out with respect to the initial clearance, the stator stiffness, the damping parameter, and the coefficient of friction. There is good agreement between the results of this research and the experimental results in the literature.
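Coupling two discretized bodies through Lagrange multipliers, as described above for the rotor and stator, amounts to solving a saddle-point (KKT) system in which the multiplier is the contact force. A toy two-degree-of-freedom sketch (stiffnesses and loads are made up for illustration; the real rotor/stator matrices are large and time-dependent):

```python
import numpy as np

# Two 1-DOF "bodies" with stiffnesses k1, k2, tied together by the
# constraint u1 - u2 = 0 via a Lagrange multiplier lam (the contact force).
k1, k2, f1, f2 = 4.0, 2.0, 1.0, 0.0
K = np.diag([k1, k2])                 # uncoupled stiffness matrix
A = np.array([[1.0, -1.0]])           # constraint matrix: A u = 0

# Saddle-point system:  [K  A^T] [u  ]   [f]
#                       [A   0 ] [lam] = [0]
KKT = np.block([[K, A.T], [A, np.zeros((1, 1))]])
rhs = np.array([f1, f2, 0.0])
u1, u2, lam = np.linalg.solve(KKT, rhs)
```

Here the equilibrium equations become k1*u1 + lam = f1 and k2*u2 - lam = f2 with u1 = u2, so the multiplier is exactly the force transmitted through the contact; in the rubbing algorithm the same structure holds per contact node, with K assembled from the finite element models.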
Freundorfer, Katrin; Kats, Daniel; Korona, Tatiana; Schütz, Martin
2010-12-28
A new multistate local CC2 response method for calculating excitation energies and first-order properties of excited triplet states in extended molecular systems is presented. The Laplace transform technique is employed to partition the left/right local CC2 eigenvalue problems as well as the linear equations determining the Lagrange multipliers needed for the properties. The doubles part in the equations can then be inverted on-the-fly and only effective equations for the singles part must be solved iteratively. The local approximation presented here is adaptive and state-specific. The density-fitting method is utilized to approximate the electron-repulsion integrals. The accuracy of the new method is tested by comparison to canonical reference values for a set of 12 test molecules and 62 excited triplet states. As an illustrative application example, the lowest four triplet states of 3-(5-(5-(4-(bis(4-(hexyloxy)phenyl)amino)phenyl)thiophene-2-yl)thiophene-2-yl)-2-cyanoacrylic acid, an organic sensitizer for solar-cell applications, are computed in the present work. No triplet charge-transfer states are detected among these states. This situation contrasts with the singlet states of this molecule, where the lowest singlet state has been recently found to correspond to an excited state with a pronounced charge-transfer character having a large transition strength.
Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method
Kalchev, Delyan Z.; Lee, C. S.; Villa, U.; ...
2016-09-22
Here, we propose two multilevel spectral techniques for constructing coarse discretization spaces for saddle-point problems corresponding to PDEs involving a divergence constraint, with a focus on mixed finite element discretizations of scalar self-adjoint second order elliptic equations on general unstructured grids. We use element agglomeration algebraic multigrid (AMGe), which employs coarse elements that can have nonstandard shape since they are agglomerates of fine-grid elements. The coarse basis associated with each agglomerated coarse element is constructed by solving local eigenvalue problems and local mixed finite element problems. This construction leads to stable upscaled coarse spaces and guarantees the inf-sup compatibility of the upscaled discretization. Also, the approximation properties of these upscaled spaces improve by adding more local eigenfunctions to the coarse spaces. The higher accuracy comes at the cost of additional computational effort, as the sparsity of the resulting upscaled coarse discretization (referred to as operator complexity) deteriorates when we introduce additional functions in the coarse space. We also provide an efficient solver for the coarse (upscaled) saddle-point system by employing hybridization, which leads to a symmetric positive definite (s.p.d.) reduced system for the Lagrange multipliers, and to solve the latter s.p.d. system, we use our previously developed spectral AMGe solver. Numerical experiments, in both two and three dimensions, are provided to illustrate the efficiency of the proposed upscaling technique.
An Information Theory Approach to Nonlinear, Nonequilibrium Thermodynamics
NASA Astrophysics Data System (ADS)
Rogers, David M.; Beck, Thomas L.; Rempe, Susan B.
2011-10-01
Using the problem of ion channel thermodynamics as an example, we illustrate the idea of building up complex thermodynamic models by successively adding physical information. We present a new formulation of information algebra that generalizes methods of both information theory and statistical mechanics. From this foundation we derive a theory for ion channel kinetics, identifying a nonequilibrium `process' free energy functional in addition to the well-known integrated work functionals. The Gibbs-Maxwell relation for the free energy functional is a Green-Kubo relation, applicable arbitrarily far from equilibrium, that captures the effect of non-local and time-dependent behavior from transient thermal and mechanical driving forces. Comparing the physical significance of the Lagrange multipliers to the canonical ensemble suggests definitions of nonequilibrium ensembles at constant capacitance or inductance in addition to constant resistance. Our result is that statistical mechanical descriptions derived from a few primitive algebraic operations on information can be used to create experimentally-relevant and computable models. By construction, these models may use information from more detailed atomistic simulations. Two surprising consequences to be explored in further work are that (in)distinguishability factors are automatically predicted from the problem formulation and that a direct analogue of the second law for thermodynamic entropy production is found by considering information loss in stochastic processes. The information loss identifies a novel contribution from the instantaneous information entropy that ensures non-negative loss.
Application of target costing in machining
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.
2004-11-01
In today's intensely competitive and highly volatile business environment, consistent development of low cost and high quality products meeting the functionality requirements is a key to a company's survival. Companies continuously strive to reduce the costs while still producing quality products to stay ahead in the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determine the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. Extensive literature review revealed that companies in automotive, electronic and process industries have reaped the benefits of target costing. However, the target costing approach has not been applied in the machining industry; other techniques based on Geometric Programming, Goal Programming, and Lagrange multipliers have been proposed for application in this industry. These models follow a forward approach, by first selecting a set of machining parameters, and then determining the machining cost. Hence in this study we have developed an algorithm to apply the concepts of target costing, which is a backward approach that selects the machining parameters based on the required machining costs, and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for turning operation and was successfully validated using practical data.
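The backward selection described above can be sketched as follows: fix the target machining cost (selling price minus desired profit), then search the feasible cutting parameters for the most productive combination that still meets it. All cost-model coefficients below are hypothetical, and the cost model is a textbook-style turning model rather than the one validated in the study:

```python
import math

# Hypothetical turning cost model (all coefficients illustrative):
# machining time t_m = pi*D*Lc / (1000*v*f) minutes,
# Taylor tool life  v * T**n = C  =>  T = (C / v)**(1/n).
D_mm, Lc_mm = 50.0, 120.0          # workpiece diameter and cut length
labor_rate = 0.5                   # $/min (machine + operator)
tool_cost, n_taylor, C_taylor = 3.0, 0.25, 400.0

def machining_time(v, f):
    return math.pi * D_mm * Lc_mm / (1000.0 * v * f)

def cost_per_part(v, f):
    t_m = machining_time(v, f)
    T = (C_taylor / v) ** (1.0 / n_taylor)        # tool life, minutes
    return labor_rate * t_m + tool_cost * t_m / T # labor + tooling

def target_cost_params(target, speeds, feeds):
    """Backward (target-costing) selection: among parameter pairs whose
    cost meets the target, pick the one with the shortest machining time."""
    best = None
    for v in speeds:
        for f in feeds:
            if cost_per_part(v, f) <= target:
                t_m = machining_time(v, f)
                if best is None or t_m < best[0]:
                    best = (t_m, v, f)
    return best  # None if the target cost is infeasible

selling_price, desired_profit = 4.0, 2.5
target = selling_price - desired_profit           # target cost per part
speeds = [60 + 10 * i for i in range(20)]         # cutting speed, m/min
feeds = [0.10 + 0.05 * i for i in range(8)]       # feed, mm/rev
t_m, v_sel, f_sel = target_cost_params(target, speeds, feeds)
```

The forward models cited in the abstract run the inner functions in the other direction (parameters in, cost out); the target-costing twist is only the outer search constrained by the required cost.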
An immersed boundary method for simulating vesicle dynamics in three dimensions
NASA Astrophysics Data System (ADS)
Seol, Yunchang; Hu, Wei-Fan; Kim, Yongsam; Lai, Ming-Chih
2016-10-01
We extend our previous immersed boundary (IB) method for 3D axisymmetric inextensible vesicle in Navier-Stokes flows (Hu et al., 2014 [17]) to general three dimensions. Despite a similar spirit in numerical algorithms to the axisymmetric case, the fully 3D numerical implementation is much more complicated and is far from straightforward. A vesicle membrane surface is known to be incompressible and exhibits bending resistance. As in 3D axisymmetric case, instead of keeping the vesicle locally incompressible, we adopt a modified elastic tension energy to make the vesicle surface patch nearly incompressible so that solving the unknown tension (Lagrange multiplier for the incompressible constraint) can be avoided. Nevertheless, the new elastic force derived from the modified tension energy has exactly the same mathematical form as the original one except the different definitions of tension. The vesicle surface is discretized on a triangular mesh where the elastic tension and bending force are calculated on each vertex (Lagrangian marker in the IB method) of the triangulation. A series of numerical tests on the present scheme are conducted to illustrate the robustness and applicability of the method. We perform the convergence study for the immersed boundary forces and the fluid velocity field. We then study the vesicle dynamics in various flows such as quiescent, simple shear, and gravitational flows. Our numerical results show good agreements with those obtained in previous theoretical, experimental and numerical studies.
NASA Astrophysics Data System (ADS)
Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun
2017-03-01
To further enhance the small targets and suppress the heavy clutters simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatching of IPI model's implicit assumption of a large number of observations with the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which could provide a more accurate background estimation and almost eliminate all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which could not only further remove undesirable components but also accelerate the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments are conducted, demonstrating that the proposed model has a significant improvement over the other nine competitive methods in terms of both clutter suppressing performance and convergence rate.
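The partial-sum surrogate mentioned above replaces the nuclear norm's uniform shrinkage of all singular values with shrinkage of only the tail, leaving the largest N untouched. Its proximal operator, the core step inside each ALM iteration, can be sketched as follows (a generic operator sketch under that interpretation, not the authors' full IPI solver):

```python
import numpy as np

def partial_svt(X, tau, N):
    """Partial singular value thresholding: keep the N largest singular
    values intact and soft-threshold the remaining ones by tau (the
    proximal operator of the partial sum of singular values)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[N:] = np.maximum(s[N:] - tau, 0.0)
    return U @ np.diag(s) @ Vt

# The leading (strong-edge) directions survive untouched while weak
# components are shrunk, giving a better background estimate than
# uniform nuclear-norm shrinkage would.
X = np.diag([5.0, 3.0, 1.0])
Y = partial_svt(X, tau=2.0, N=1)
```

With tau = 2 and N = 1, the singular values (5, 3, 1) become (5, 1, 0): the dominant component is preserved exactly, which is the property the abstract exploits to protect strong background edges.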
Unified cosmic history in modified gravity: From F(R) theory to Lorentz non-invariant models
NASA Astrophysics Data System (ADS)
Nojiri, Shin'Ichi; Odintsov, Sergei D.
2011-08-01
The classical generalization of general relativity is considered as the gravitational alternative for a unified description of the early-time inflation with late-time cosmic acceleration. The structure and cosmological properties of a number of modified theories, including traditional F(R) and Hořava-Lifshitz F(R) gravity, scalar-tensor theory, string-inspired and Gauss-Bonnet theory, non-local gravity, non-minimally coupled models, and power-counting renormalizable covariant gravity are discussed. Different representations of and relations between such theories are investigated. It is shown that some versions of the above theories may be consistent with local tests and may provide a qualitatively reasonable unified description of inflation with the dark energy epoch. The cosmological reconstruction of different modified gravities is provided in great detail. It is demonstrated that eventually any given universe evolution may be reconstructed for the theories under consideration, and the explicit reconstruction is applied to an accelerating spatially flat Friedmann-Robertson-Walker (FRW) universe. Special attention is paid to Lagrange multiplier constrained and conventional F(R) gravities; for the latter F(R) theory, the effective ΛCDM era and phantom divide crossing acceleration are obtained. The occurrences of the Big Rip and other finite-time future singularities in modified gravity are reviewed along with their solutions via the addition of higher-derivative gravitational invariants.
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
Learning Robust and Discriminative Subspace With Low-Rank Constraints.
Li, Sheng; Fu, Yun
2016-11-01
In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, H.D.
1991-11-01
Several of the technologies being evaluated for the treatment of waste material involve chemical reactions. Our example is the in situ vitrification (ISV) process where electrical energy is used to melt soil and waste into a "glass-like" material that immobilizes and encapsulates any residual waste. During the ISV process, various chemical reactions may occur that produce significant amounts of products which must be contained and treated. The APOLLO program was developed to assist in predicting the composition of the gases that are formed. Although the development of this program was directed toward ISV applications, it should be applicable to other technologies where chemical reactions are of interest. This document presents the mathematical methodology of the APOLLO computer code. APOLLO is a computer code that calculates the products of both equilibrium and kinetic chemical reactions. The current version, written in FORTRAN, is readily adaptable to existing transport programs designed for the analysis of chemically reacting flow systems. Separate subroutines, EQREACT and KIREACT, for equilibrium and kinetic chemistry respectively, have been developed. A fully detailed description of the numerical techniques used, which include both Lagrange multipliers and a third-order integration scheme, is presented. Sample test problems are presented and the results are in excellent agreement with those reported in the literature.
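The equilibrium step in such a code amounts to minimizing the mixture Gibbs energy subject to element-balance constraints, with the constraints enforced through Lagrange multipliers. A minimal sketch (this is not APOLLO itself; the species set, standard potentials, and the use of SciPy's SLSQP solver are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative ideal-gas system: N2, H2, NH3, with dimensionless
# standard chemical potentials g0 (made-up values, RT = 1).
g0 = np.array([0.0, 0.0, -1.5])
A = np.array([[2.0, 0.0, 1.0],     # N atoms per molecule
              [0.0, 2.0, 3.0]])    # H atoms per molecule
b = np.array([2.0, 6.0])           # element totals: 1 mol N2 + 3 mol H2

def gibbs(n):
    # G/RT = sum_i n_i * (g0_i + ln(n_i / N_total)) for an ideal mixture
    return float(np.dot(n, g0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([0.5, 1.5, 1.0]),
               bounds=[(1e-9, None)] * 3,
               constraints={'type': 'eq', 'fun': lambda n: A @ n - b},
               method='SLSQP')     # enforces A n = b via Lagrange multipliers
n_eq = res.x
```

At the optimum the multipliers attached to the element-balance constraints are the element potentials; a kinetic subroutine in the spirit of KIREACT would instead integrate rate equations in time rather than minimize G.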
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be
At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid (dust) non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in a radiation-dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch the possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.
Advances in reduction techniques for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1995-01-01
Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
Zhang, Jing-Bo; Li, Rui-Qi; Xiang, Xiao-Guo; Manchester, Steven R.; Lin, Li; Wang, Wei; Wen, Jun; Chen, Zhi-Duan
2013-01-01
The hickory genus (Carya) contains ca. 17 species distributed in subtropical and tropical regions of eastern Asia and subtropical to temperate regions of eastern North America. Previously, the phylogenetic relationships between eastern Asian and eastern North American species of Carya were not fully confirmed even with an extensive sampling; biogeographic and diversification patterns had thus never been investigated in a phylogenetic context. We sampled 17 species of Carya and 15 species representing all other genera of the Juglandaceae as outgroups, with eight nuclear and plastid loci to reconstruct the phylogeny of Carya. The phylogenetic positions of seven extinct genera of the Juglandaceae were inferred using morphological characters and the molecular phylogeny as a backbone constraint. Divergence times within Carya were estimated with relaxed Bayesian dating. Biogeographic analyses were performed in DIVA and LAGRANGE. Diversification rates were inferred by LASER and APE packages. Our results support two major clades within Carya, corresponding to the lineages of eastern Asia and eastern North America. The split between the two disjunct clades is estimated to be 21.58 (95% HPD 11.07-35.51) Ma. Genus-level DIVA and LAGRANGE analyses incorporating both extant and extinct genera of the Juglandaceae suggested that Carya originated in North America, and migrated to Eurasia during the early Tertiary via the North Atlantic land bridge. Fragmentation of the distribution caused by global cooling in the late Tertiary resulted in the current disjunction. The diversification rate of hickories in eastern North America appeared to be higher than that in eastern Asia, which is ascribed to greater ecological opportunities, key morphological innovations, and polyploidy. PMID:23875028
A Novel Multi-Receiver Signcryption Scheme with Complete Anonymity.
Pang, Liaojun; Yan, Xuxia; Zhao, Huiyang; Hu, Yufei; Li, Huixian
2016-01-01
Anonymity, which is more and more important to multi-receiver schemes, has been taken into consideration by many researchers recently. To protect receiver anonymity, the first multi-receiver scheme based on the Lagrange interpolating polynomial was proposed in 2010. To ensure the sender's anonymity, the concept of the ring signature was proposed in 2005, but afterwards this scheme was proven to have some weaknesses, and at the same time a completely anonymous multi-receiver signcryption scheme was proposed. In this completely anonymous scheme, sender anonymity is achieved by improving the ring signature, and receiver anonymity is achieved by also using the Lagrange interpolating polynomial. Unfortunately, the Lagrange interpolation method was shown to fail to protect the anonymity of receivers, because each authorized receiver can judge whether anyone else is authorized or not. Therefore, the completely anonymous multi-receiver signcryption scheme mentioned above can only protect sender anonymity. In this paper, we propose a new completely anonymous multi-receiver signcryption scheme with a new polynomial technology used to replace the Lagrange interpolating polynomial, which can mix the identity information of receivers into a ciphertext element and prevent authorized receivers from verifying others. Along with receiver anonymity, the proposed scheme also achieves sender anonymity. Meanwhile, decryption fairness and public verification are also provided.
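The receiver-anonymity failure described above can be illustrated with a toy numeric sketch. This is not the cited scheme: it uses the simple masked-product construction f(x) = r·∏(x − id_i) + k over a prime field, which, like Lagrange interpolation through the points (id_i, k), returns the session key k exactly at every authorized identity. The modulus, identities, mask, and key below are all invented values.

```python
# Toy sketch (NOT the cited scheme) of the receiver-anonymity leak: a
# masked-product polynomial f(x) = r * prod(x - id_i) + k over a prime
# field satisfies f(id_i) = k at every authorized identity, exactly as the
# Lagrange interpolation construction does. Modulus, identities, mask r and
# session key k are all invented values.
P = 2**31 - 1                      # illustrative prime modulus

def f(x, ids, r, k):
    """Masking polynomial: returns k exactly when x is an authorized id."""
    prod = 1
    for i in ids:
        prod = prod * (x - i) % P
    return (r * prod + k) % P

authorized = [17, 42, 99]          # toy hashed receiver identities
r, k = 987654321, 123456789        # fixed here; random in a real scheme

recovered = f(42, authorized, r, k)            # an authorized receiver gets k
# the leak: anyone holding k can test other identities for membership
peer_is_authorized = f(17, authorized, r, k) == k
outsider_is_authorized = f(1000, authorized, r, k) == k
```

Because any key-holding receiver can evaluate f at a guessed identity and compare the result with k, the membership of every other receiver is testable, which is exactly the leak that motivates replacing the interpolating polynomial.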
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
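As a hedged illustration of the multiplier idea (not the authors' MODFLOW/DREAM setup), the sketch below treats an uncertain spatially distributed recharge field as a known base field scaled by a single multiplier m, and samples the posterior of m with a plain random-walk Metropolis step against one synthetic observation. The field values, toy forward model, and noise levels are all invented for illustration.

```python
# Hedged sketch, NOT the authors' MODFLOW+DREAM workflow: an uncertain
# spatially distributed recharge input is represented as base_field * m,
# and the scalar multiplier m is sampled by random-walk Metropolis against
# a single synthetic observation. All values are illustrative assumptions.
import math
import random

random.seed(1)
base_recharge = [1.0, 1.2, 0.8, 1.1]   # toy "gridded" recharge field (mm/day)
true_m = 0.7                           # multiplier used to make synthetic data

def forward(m):
    """Toy forward model: response proportional to total scaled recharge."""
    return sum(m * r for r in base_recharge)

obs = forward(true_m) + 0.01           # synthetic observation with small error
sigma = 0.05                           # assumed observation noise

def log_like(m):
    return -0.5 * ((forward(m) - obs) / sigma) ** 2

m, samples = 1.0, []
for _ in range(5000):
    prop = m + random.gauss(0.0, 0.05)             # random-walk proposal
    if random.random() < math.exp(min(0.0, log_like(prop) - log_like(m))):
        m = prop                                   # Metropolis accept
    samples.append(m)

posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

DREAM adds adaptive, multi-chain proposals on top of this basic accept/reject logic, and the real study estimates many multipliers and model parameters jointly rather than one scalar.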
Adaptive Multi-Agent Systems for Constrained Optimization
NASA Technical Reports Server (NTRS)
Macready, William; Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors
2001-03-01
Excerpts from the report front matter and body: List of Tables (Table 1: Heats of Vaporization Parameter for Two-liner Phase Transformation, Complete Liner Sublimation and/or Combined Liner…); acronym list (1-D: One-dimensional; 2-D: Two-dimensional; ALE3D: Arbitrary Lagrange-Eulerian (3-D) Computer Code; ALEGRA: 3-D Arbitrary Lagrange-Eulerian Computer Code for…); body text: …case-liner bond areas and in the grain inner bore to explore the pre-ignition and ignition phases, as well as burning evolution in rocket motor fast…
7 CFR 761.205 - Computing the formula allocation.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS GENERAL PROGRAM ADMINISTRATION Allocation of Farm Loan Programs... held in the National Office reserve and distributed by base and administrative allocation, multiplied... allocation − national reserve − base allocation − administrative allocation) × State Factor (b) To calculate the...
An Exposition on the Nonlinear Kinematics of Shells, Including Transverse Shearing Deformations
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2013-01-01
An in-depth exposition on the nonlinear deformations of shells with "small" initial geometric imperfections is presented without the use of tensors. First, the mathematical descriptions of an undeformed-shell reference surface, and its deformed image, are given in general nonorthogonal coordinates. The two-dimensional Green-Lagrange strains of the reference surface are derived and simplified for the case of "small" strains. Linearized reference-surface strains, rotations, curvatures, and torsions are then derived and used to obtain the "small" Green-Lagrange strains in terms of linear deformation measures. Next, the geometry of the deformed shell is described mathematically and the "small" three-dimensional Green-Lagrange strains are given. The deformations of the shell and its reference surface are related by introducing a kinematic hypothesis that includes transverse shearing deformations and contains the classical Love-Kirchhoff kinematic hypothesis as a proper, explicit subset. Lastly, summaries of the essential equations are given for general nonorthogonal and orthogonal coordinates, and the basis for further simplification of the equations is discussed.
Augmented Lagrange Hopfield network for solving economic dispatch problem in competitive environment
NASA Astrophysics Data System (ADS)
Vo, Dieu Ngoc; Ongsakul, Weerakorn; Nguyen, Khai Phuc
2012-11-01
This paper proposes an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem in a competitive environment. The proposed ALHN is a continuous Hopfield network whose energy function is based on an augmented Lagrange function for efficiently dealing with constrained optimization problems. The ALHN method can overcome drawbacks of the conventional Hopfield network such as local optima, long computational times, and the limitation to linear constraints. The proposed method is used for solving the ED problem with two revenue models: payment for power delivered and payment for reserve allocated. The proposed ALHN has been tested on two systems of 3 units and 10 units for the two considered revenue models. The results obtained from the proposed method are compared to those from the differential evolution (DE) and particle swarm optimization (PSO) methods. The comparison indicates that the proposed method is very efficient for solving the problem. Therefore, the proposed ALHN could be a favorable tool for the ED problem in a competitive environment.
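The augmented Lagrange function in the abstract can be made concrete with a generic method-of-multipliers sketch on a tiny economic dispatch instance. This is plain gradient-based optimization, not the Hopfield-network dynamics of ALHN, and the cost coefficients and demand are invented:

```python
# Generic method-of-multipliers sketch for a 3-unit economic dispatch with
# quadratic costs C_i(p) = a_i p^2 + b_i p and one power-balance constraint
# sum(p) = D. Not the ALHN of the abstract; coefficients and demand are
# invented illustrative values.
a = [0.01, 0.02, 0.015]   # $/MW^2 (assumed)
b = [2.0, 1.5, 1.8]       # $/MW (assumed)
D = 300.0                 # demand, MW (assumed)
rho, lam = 0.05, 0.0      # penalty weight and Lagrange multiplier
p = [100.0, 100.0, 100.0]

for _ in range(200):                      # outer loop: multiplier updates
    for _ in range(200):                  # inner loop: gradient steps on p
        gap = sum(p) - D
        for i in range(3):
            grad_i = 2 * a[i] * p[i] + b[i] + lam + rho * gap
            p[i] -= 2.0 * grad_i
    lam += rho * (sum(p) - D)             # standard multiplier update

balance_error = abs(sum(p) - D)           # constraint violation
marginal_price = -lam                     # converges to the system marginal cost
```

At convergence all units operate at equal marginal cost (about $4.59/MW for these coefficients), and the multiplier recovers that shadow price; ALHN encodes the same augmented Lagrangian as the energy function of a continuous Hopfield network instead.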
Holonomicity analysis of electromechanical systems
NASA Astrophysics Data System (ADS)
Wcislik, Miroslaw; Suchenia, Karol
2017-12-01
Electromechanical systems are described using state variables that contain electrical and mechanical components. The equations of motion, both electrical and mechanical, describe the relationships between these components. These equations are obtained using Lagrange functions. On the basis of the Lagrange function and the Lagrange-d'Alembert equation, a methodology for obtaining the equations of electromechanical systems was presented, together with a discussion of the nonholonomicity of these systems. An electromechanical system in the form of a single-phase reluctance motor was used to verify the presented method. The mechanical system was built so that it can oscillate as the element of a physical pendulum. On the basis of the pendulum oscillations, the parameters of the electromechanical system were defined. The identification of the motor's electric parameters as a function of the rotation angle was carried out. In this paper the characteristics and the parameters of the motion equations of the motor are presented. The parameters of the motion equations obtained from the experiment and from the second-order Lagrange equations are compared.
A macroscopic plasma Lagrangian and its application to wave interactions and resonances
NASA Technical Reports Server (NTRS)
Peng, Y. K. M.
1974-01-01
The derivation of a macroscopic plasma Lagrangian is considered, along with its application to the description of nonlinear three-wave interaction in a homogeneous plasma and linear resonance oscillations in an inhomogeneous plasma. One approach to obtain the Lagrangian is via the inverse problem of the calculus of variations for arbitrary first- and second-order quasilinear partial differential systems. Necessary and sufficient conditions for the given equations to be Euler-Lagrange equations of a Lagrangian are obtained. These conditions are then used to determine the transformations that convert some classes of non-Euler-Lagrange equations to Euler-Lagrange form. The Lagrangians for a linear resistive transmission line and a linear warm collisional plasma are derived as examples. Using energy considerations, the correct macroscopic plasma Lagrangian is shown to differ from the velocity-integrated Low Lagrangian by a macroscopic potential energy that equals twice the particle thermal kinetic energy plus the energy lost by heat conduction.
Coupled nonlinear aeroelasticity and flight dynamics of fully flexible aircraft
NASA Astrophysics Data System (ADS)
Su, Weihua
This dissertation introduces an approach to effectively model and analyze the coupled nonlinear aeroelasticity and flight dynamics of highly flexible aircraft. A reduced-order, nonlinear, strain-based finite element framework is used, which is capable of assessing the fundamental impact of structural nonlinear effects in preliminary vehicle design and control synthesis. The cross-sectional stiffness and inertia properties of the wings are calculated along the wing span and then incorporated into the one-dimensional nonlinear beam formulation. Finite-state unsteady subsonic aerodynamics is used to compute airloads along lifting surfaces. Flight dynamic equations are then introduced to complete the aeroelastic/flight dynamic system equations of motion. Instead of merely considering the flexibility of the wings, the current work allows all members of the vehicle to be flexible. Because they are slender structures, the wings, tail, and fuselage of highly flexible aircraft can be modeled as beams undergoing three-dimensional displacements and rotations. New kinematic relationships are developed to handle the split beam systems, such that fully flexible vehicles can be effectively modeled within the existing framework. Different aircraft configurations are modeled and studied, including Single-Wing, Joined-Wing, Blended-Wing-Body, and Flying-Wing configurations. The Lagrange Multiplier Method is applied to model the nodal displacement constraints at the joint locations. Based on the proposed models, roll response and stability studies are conducted on fully flexible and rigidized models. The impacts of the flexibility of different vehicle members on flutter with rigid body motion constraints, flutter in free flight condition, and roll maneuver performance are presented. Also, the static stability of the compressive member of the Joined-Wing configuration is studied.
A spatially-distributed discrete gust model is incorporated into the time simulation of the framework. Gust responses of the Flying-Wing configuration subject to stall effects are investigated. A bilinear torsional stiffness model is introduced to study the skin wrinkling due to large bending curvature of the Flying-Wing. The numerical studies illustrate the improvements of the existing reduced-order formulation with new capabilities of both structural modeling and coupled aeroelastic and flight dynamic analysis of fully flexible aircraft.
NASA Astrophysics Data System (ADS)
Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I. N.; Antoun, T.; Smith, J.; Hall, R.; Woodson, S.
2014-12-01
Understanding the flow of fines and particulate sorting in porous and fractured media during sediment transport is significant for industrial, environmental, geotechnical and petroleum technologies, to name a few. For example, the safety of dam structures requires the characterization of the granular filter's ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam's safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique. The numerical code is validated against experiments conducted at the USACE and ERDC. These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of the hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and increasing concentration of suspended particles.
We have extended these results to more realistic heterogeneous population particulates for sediment transport. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security, Science and Technology Directorate, Homeland Security Advanced Research Projects Agency.
NASA Astrophysics Data System (ADS)
Allison, K. L.; Suckale, J.
2015-12-01
The 1959 eruption at Kilauea Iki, Hawaii, was unusually violent for a near-summit extrusion and the sequence of processes leading up to it remain debated. The eruption might have resulted from the progressive emptying of a stratified magma chamber or from a new magma batch bypassing the base of the magma storage region and mixing with the differentiated magma at shallow depth. In this study, we test if the picritic scoria erupted during the 1959 eruption can shed light on the conditions in the magmatic plumbing system prior to eruption. Scoria from the 1959 eruption contain glomeroporphyritic aggregates of olivine crystals, primarily composed of 2-4 crystals but comprising as many as 16, which vary in composition and three-dimensional texture. The clustering of crystals from different environments and their preferential alignment along crystallographic axes suggest that the glomerocrysts may be the result of synneusis - the drifting together of crystals (Schwindinger and Anderson, 1989). Analogue laboratory experiments of clay crystals in Karo syrup (Schwindinger, 1999), however, show that two crystals settling in a still liquid will not reorient themselves into alignment. Here, we test the hypothesis that a shear-dominated flow field might have facilitated the synneusis of the Kilauea olivines. We investigate the fluid-dynamical conditions under which the glomerocrysts might have formed using direct numerical simulations at the scale of individual crystals. We have implemented an iterative numerical method for simulating the hydrodynamic interactions between olivine crystals and their feedback on the flow field in a magmatic liquid. We solve the Stokes equation in the fluid phase and include rigid, rectangular bodies representing the olivine crystals through distributed Lagrange multipliers. To allow crystals to stick together after collision, the numerical method includes a multibody collision scheme. 
Additionally, it uses an analytical quadrature scheme instead of discretizing the solid body into material volumes, increasing accuracy and reducing computational expense. Our simulations show that the Kilauea Iki glomerocrysts formed in a magmatic liquid with very low crystallinity (likely less than about 10%) and that shear might have facilitated preferential alignment.
2013-01-01
…is the derivative of the Nth-order Legendre polynomial. Given these definitions, the one-dimensional Lagrange polynomials h_i(ξ) are h_i(ξ) = −1/(N(N… 2. Detail of one interface patch in the northern hemisphere. The high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid by… smaller ones by a Lagrange polynomial of order n_I. The number of quadrilateral elements and grid points of the final grid are then given by N_p = 6(N…
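The defining cardinal property of such Lagrange interpolation bases (each h_i equals 1 at its own node and 0 at every other, and the basis reproduces constants) can be checked numerically. The sketch below uses Chebyshev-Gauss-Lobatto nodes as a stand-in for the LGL nodes of the excerpt, since unlike LGL nodes they have a closed form; the property holds for any set of distinct nodes.

```python
# Numerical check of the Lagrange cardinal property on a 1-D node set.
# Chebyshev-Gauss-Lobatto nodes stand in for the LGL nodes of the excerpt
# (LGL nodes have no closed form); h_i(x_j) = delta_ij and the partition of
# unity hold for any distinct nodes.
import math

N = 4
nodes = [-math.cos(math.pi * j / N) for j in range(N + 1)]  # CGL nodes on [-1, 1]

def h(i, x):
    """Lagrange cardinal polynomial for node i, in product form."""
    val = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            val *= (x - xj) / (nodes[i] - xj)
    return val
```

Spectral-element codes like the one excerpted exploit exactly this property: nodal values of a field are also its expansion coefficients, because h_i(x_j) = δ_ij.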
1979-01-01
…from the Bernoullis was Daniel Bernoulli's addition of the acceleration term to the beam e-… "n'est pas la même dans tous les sens", Exercices de Math… …frequencies). Improved during 1811-1816 by Germain and Lagrange and, finally, the correct derivation was produced… 1852 G. Lamé, "Leçons sur la … de la résistance des solides et des solides d'égale…" …tropic membranes and plates (low frequencies) by Euler, Jacques Bernoulli, Germain, Lagrange
The Lagrange Points in a Binary Black Hole System: Applications to Electromagnetic Signatures
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy
2010-01-01
We study the stability and evolution of the Lagrange points L4 and L5 in a black hole (BH) binary system, including gravitational radiation. We find that gas and stars can be shepherded in with the BH system until the final moments before merger, providing the fuel for a bright electromagnetic counterpart to a gravitational wave signal. Other astrophysical signatures include the ejection of hyper-velocity stars, gravitational collapse of globular clusters, and the periodic shift of narrow emission lines in AGN.
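For readers unfamiliar with the triangular Lagrange points, the geometry is simple: L4 and L5 sit at the apexes of the two equilateral triangles whose base is the line joining the two massive bodies. A minimal numeric check, with made-up masses and unit separation in barycentric coordinates:

```python
# Geometric sketch of the triangular Lagrange points of a binary: L4 and L5
# form equilateral triangles with the two bodies, so each lies at one full
# separation d from both. Masses and separation are illustrative values.
import math

m1, m2, d = 10.0, 1.0, 1.0                     # assumed masses and separation
x1 = -m2 * d / (m1 + m2)                       # primary position (barycentric)
x2 = m1 * d / (m1 + m2)                        # secondary position
xc = 0.5 * (x1 + x2)                           # midpoint of the segment
L4 = (xc, math.sqrt(3.0) / 2.0 * d)            # apex above the line
L5 = (xc, -math.sqrt(3.0) / 2.0 * d)           # apex below the line

r1 = math.hypot(L4[0] - x1, L4[1])             # distance L4 -> primary
r2 = math.hypot(L4[0] - x2, L4[1])             # distance L4 -> secondary
```

In the classical circular restricted three-body problem these points are linearly stable only for a sufficiently unequal pair (roughly m1/m2 > 24.96); the abstract's analysis extends such stability questions to binaries shrinking under gravitational radiation.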
Euler-Lagrange formulas for pseudo-Kähler manifolds
NASA Astrophysics Data System (ADS)
Park, JeongHyeong
2016-01-01
Let c be a characteristic form of degree k which is defined on a Kähler manifold of real dimension m > 2k. Taking the inner product with the Kähler form Ω^k gives a scalar invariant which can be considered as a generalized Lovelock functional. The associated Euler-Lagrange equations are a generalized Einstein-Gauss-Bonnet gravity theory; this theory restricts to the canonical formalism if c = c_2 is the second Chern form. We extend previous work studying these equations from the Kähler to the pseudo-Kähler setting.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective is to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
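The multiplier-method estimator and its precision can be sketched directly: the point estimate is N = M/P, and a delta-method approximation propagates the survey variance of P, inflated by an assumed design effect, into a standard error for N. All numbers below are illustrative, not taken from the Harare study:

```python
# Hedged sketch of the multiplier-method point estimate N = M / P with an
# approximate delta-method standard error; the binomial variance of P is
# inflated by an assumed design effect. All numbers are illustrative.
import math

M = 5000          # unique objects distributed (assumed)
p_hat = 0.25      # survey proportion reporting receipt (assumed)
n = 400           # survey sample size (assumed)
deff = 2.0        # assumed design effect of the RDS survey

var_p = deff * p_hat * (1.0 - p_hat) / n        # inflated variance of P
N_hat = M / p_hat                               # point estimate of population size
se_N = (M / p_hat**2) * math.sqrt(var_p)        # delta method: |dN/dP| * sd(P)
ci = (N_hat - 1.96 * se_N, N_hat + 1.96 * se_N) # approximate 95% interval
```

Because P enters the denominator, the factor M/P² makes the standard error blow up as p_hat shrinks, which is exactly why the authors advise designs likely to raise P (longer reference periods, more objects distributed).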
A Direct Method for Fuel Optimal Maneuvers of Distributed Spacecraft in Multiple Flight Regimes
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Cooley, D. S.; Guzman, Jose J.
2005-01-01
We present a method to solve the impulsive minimum-fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a non-linear dynamics model and parameterize the problem to allow the method to be applicable to multiple flight regimes including low-Earth orbits, highly-elliptic orbits (HEO), Lagrange point orbits, and interplanetary trajectories. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations as well as large constellations. Semianalytical derivatives are derived for the changes in the total ΔV with respect to changes in the independent variables. We also apply a set of constraints to ensure that the fuel expenditure is equalized over the spacecraft in formation. We conclude with several examples and present optimal maneuver sequences for both a HEO and a libration point formation.
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate the action probability distribution based on his/her private information for maximizing his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
Alexander, Dayna S; Alfonso, Moya L; Cao, Chunhua
2016-12-01
Currently, public health practitioners are analyzing the role that caregivers play in childhood obesity efforts. Assessing African American caregiver's perceptions of childhood obesity in rural communities is an important prevention effort. This article's objective is to describe the development and psychometric testing of a survey tool to assess childhood obesity perceptions among African American caregivers in a rural setting, which can be used for obesity prevention program development or evaluation. The Childhood Obesity Perceptions (COP) survey was developed to reflect the multidimensional nature of childhood obesity including risk factors, health complications, weight status, built environment, and obesity prevention strategies. A 97-item survey was pretested and piloted with the priority population. After pretesting and piloting, the survey was reduced to 59-items and administered to 135 African American caregivers. An exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) was conducted to test how well the survey items represented the number of Social Cognitive Theory constructs. Twenty items were removed from the original 59-item survey and acceptable internal consistency of the six factors (α=0.70-0.85) was documented for all scales in the final COP instrument. CFA resulted in a less than adequate fit; however, a multivariate Lagrange multiplier test identified modifications to improve the model fit. The COP survey represents a promising approach as a potentially comprehensive assessment for implementation or evaluation of childhood obesity programs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Applications of a constrained mechanics methodology in economics
NASA Astrophysics Data System (ADS)
Janová, Jitka
2011-11-01
This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the undergraduate level and (ii) to enable the students to gain a deeper understanding of the principles and methods routinely used in mechanics by looking at the well-known methodology from the different perspective of economics. Two constrained dynamic economic problems are presented using the economic terminology in an intuitive way. First, the Phillips model of the business cycle is presented as a system of forced oscillations, and the general problem of two interacting economies is solved by the nonholonomic dynamics approach. Second, the Cass-Koopmans-Ramsey model of economic growth is solved as a variational problem with a velocity-dependent constraint using the vakonomic approach. The specifics of interpreting the solutions in economics as compared to mechanics are discussed in detail, a discussion of the nonholonomic and vakonomic approaches to constrained problems in mechanics and economics is provided, and an economic interpretation of the Lagrange multipliers (possibly surprising for students of physics) is carefully explained. This paper can be used by undergraduate students of physics interested in interdisciplinary physics applications to gain an understanding of the current scientific approach to economics based on a physical background, or by university teachers as an attractive supplement to classical mechanics lessons.
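The "possibly surprising" economic reading of a Lagrange multiplier is that it is a shadow price: the rate at which the optimal objective improves per unit of relaxed constraint. A minimal worked check on a textbook utility problem (our own toy example, not one from the paper):

```python
# Numeric check that a Lagrange multiplier acts as a shadow price: maximize
# u(x, y) = x*y subject to the budget x + y = c. The interior optimum is
# x = y = c/2 with value u* = c^2/4, and the multiplier lambda = c/2 equals
# the marginal value of budget du*/dc. Toy example, invented for this sketch.
def max_utility(c):
    x = y = c / 2.0            # optimizer of x*y on the budget line x + y = c
    return x * y               # optimal value u*(c) = c^2 / 4

c = 10.0
lam = c / 2.0                                           # analytic multiplier
eps = 1e-4
# central finite difference of the optimal value with respect to the budget
marginal = (max_utility(c + eps) - max_utility(c - eps)) / (2.0 * eps)
```

The finite-difference marginal value of budget matches the analytic multiplier, which is the economic content of the Lagrange multiplier the paper explains to physics students.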
FV-MHMM: A Discussion on Weighting Schemes.
NASA Astrophysics Data System (ADS)
Franc, J.; Gerald, D.; Jeannin, L.; Egermann, P.; Masson, R.
2016-12-01
Upscaling or homogenization techniques consist in finding block-equivalent or equivalent upscaled properties on a coarse grid from heterogeneous properties defined on an underlying fine grid. However, this can become costly and resource consuming. Harder et al., 2013, developed a Multiscale Hybrid-Mixed Method (MHMM) of upscaling to treat Darcy-type equations on heterogeneous fields, formulated using a finite element method. Recently, Franc et al., 2016, extended this method of upscaling to a finite volume formulation (FV-MHMM). Although convergence when refining the Lagrange multipliers space has been observed, numerical artefacts can occur by trapping the flow numerically in regions of low permeability. This work will present the development of the method along with the results obtained from its classical formulation. Then, two weighting schemes and their benefits for the FV-MHMM method will be presented in some simple random permeability cases. The next example will involve a larger heterogeneous 2D permeability field extracted from the 10th SPE test case. Eventually, multiphase flow will be addressed as an extension of this single-phase flow method. An elliptic pressure equation solved on the coarse grid via FV-MHMM will be sequentially coupled with a hyperbolic saturation equation on the fine grid. The improved accuracy thanks to the weighting scheme will be measured against a finite volume fine-grid solution. References: Harder, C., Paredes, D. and Valentin, F., A family of multiscale hybrid-mixed finite element methods for the Darcy equation with rough coefficients, Journal of Computational Physics, 2013. Franc, J., Debenest, G., Jeannin, L., Egermann, P. and Masson, R., FV-MHMM for reservoir modelling, ECMOR XV - 15th European Conference on the Mathematics of Oil Recovery, 2015.
Finite Element Method-Based Kinematics and Closed-Loop Control of Soft, Continuum Manipulators.
Bieze, Thor Morales; Largilliere, Frederick; Kruszewski, Alexandre; Zhang, Zhongkai; Merzouki, Rochdi; Duriez, Christian
2018-06-01
This article presents a modeling methodology and experimental validation for soft manipulators to obtain forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature, these manipulators are usually classified as continuum robots. However, their main characteristic of interest in this article is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robots that is suitable for offline path planning and position control. The modeling methodology presented relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on the finite element method with a numerical optimization based on Lagrange multipliers to obtain FKM and IKM. To reduce the dimension of the problem, at each step, a projection of the model to the constraint space (gathering actuators, sensors, and end-effector) is performed to obtain the smallest possible number of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two different manipulators with complex structural geometry. An experimental comparison is also performed in one of the robots, between two other geometric approaches and the approach that is showcased in this article. A closed-loop controller based on a state estimator is proposed. The controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
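For readers unfamiliar with this class of solvers, SciPy's `minimize` exposes an SLSQP implementation; a toy nonlinearly constrained problem of the kind SQP methods target (objective and constraint chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x - 1)^2 + (y - 2.5)^2 subject to the nonlinear
# inequality constraint x * y >= 4, starting from a feasible point.
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.5) ** 2
constraints = [{"type": "ineq", "fun": lambda v: v[0] * v[1] - 4.0}]
res = minimize(objective, x0=[2.0, 2.0], method="SLSQP",
               constraints=constraints)
```

The unconstrained minimum (1, 2.5) violates the constraint, so the SQP iterations trade off objective decrease against feasibility, exactly the balancing act the runtime comparison above is probing at scale.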
Non Abelian T-duality in Gauged Linear Sigma Models
NASA Astrophysics Data System (ADS)
Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando; Santos-Silva, Roberto
2018-04-01
Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSMs as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; they depend in general on semi-chiral superfields, and for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches that of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their SUSY vacua.
Spatial analysis of highway incident durations in the context of Hurricane Sandy.
Xie, Kun; Ozbay, Kaan; Yang, Hong
2015-01-01
The objectives of this study are (1) to develop an incident duration model that can account for the spatial dependence of duration observations, and (2) to investigate the impacts of a hurricane on incident duration. Highway incident data from New York City and its surrounding regions before and after Hurricane Sandy were used for the study. Moran's I statistics confirmed that durations of neighboring incidents were spatially correlated. Moreover, Lagrange multiplier tests suggested that the spatial dependence should be captured in a spatial lag specification. A spatial error model, a spatial lag model and a standard model without consideration of spatial effects were developed. The spatial lag model is found to outperform the others by capturing the spatial dependence of incident durations via a spatially lagged dependent variable. It was further used to assess the effects of hurricane-related variables on incident duration. The results show that incidents during and after the hurricane are expected to have 116.3% and 79.8% longer durations, respectively, than those that occurred in regular times. However, no significant increase in incident duration is observed in the evacuation period before Sandy's landfall. Results of temporal stability tests further confirm the existence of significant changes in incident duration patterns during and after the hurricane. These findings can provide insights to aid in the development of hurricane evacuation plans and emergency management strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
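As a side note, the global Moran's I statistic used to detect spatial correlation has a compact closed form, I = (n/S0) * z'Wz / z'z, with z the mean-centered values, W the spatial weight matrix, and S0 the sum of the weights; a minimal sketch with hypothetical data:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                       # centered values
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

# Four sites on a line with rook adjacency; a smooth spatial trend in x
# should give a clearly positive Moran's I.
x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
I = morans_i(x, w)  # evaluates to 1/3 for this configuration
```

Values near zero indicate no spatial autocorrelation; the significantly positive values found for incident durations are what motivate the spatial lag specification above.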
Inversion of geophysical potential field data using the finite element method
NASA Astrophysics Data System (ADS)
Lamichhane, Bishnu P.; Gross, Lutz
2017-12-01
The inversion of geophysical potential field data can be formulated as an optimization problem with a constraint in the form of a partial differential equation (PDE). It is common practice, if possible, to provide an analytical solution for the forward problem and to reduce the problem to a finite-dimensional optimization problem. In an alternative approach, the optimization is applied to the continuous problem, and the resulting system of coupled PDEs is subsequently solved using a standard PDE discretization method, such as the finite element method (FEM). In this paper, we show that under very mild conditions on the data misfit functional and the forward problem in three-dimensional space, the continuous optimization problem and its FEM discretization are well-posed, including the existence and uniqueness of the respective solutions. We provide error estimates for the FEM solution. A main result of the paper is that the FEM spaces used for the forward problem and the Lagrange multiplier need to be identical but can be chosen independently of the FEM space used to represent the unknown physical property. We demonstrate the convergence of the solution approximations in a numerical example. The second numerical example, which investigates the selection of FEM spaces, shows that from the perspective of computational efficiency one should use a 2 to 4 times finer mesh for the forward problem in comparison to the mesh of the physical property.
Self-Taught Low-Rank Coding for Visual Learning.
Li, Sheng; Li, Kang; Fu, Yun
2018-03-01
The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data. In this paper, we focus on building a self-taught coding framework, which can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain, in order to characterize the high-level structural information in the target domain. By leveraging a high quality dictionary learned across auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been proven to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The proposed representation learning framework is called self-taught low-rank (S-Low) coding, which can be formulated as a nonconvex rank-minimization and dictionary learning problem. We devise an efficient majorization-minimization augmented Lagrange multiplier algorithm to solve it. Based on the proposed S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.
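A core inner step of augmented Lagrange multiplier schemes for rank minimization (though not the authors' full S-Low solver) is singular value thresholding, the proximal operator of the nuclear norm; a minimal sketch:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*,
    the inner step of ALM-style low-rank solvers. Each singular value is
    shrunk by tau and clipped at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding a diagonal matrix: singular values 3, 0.5 -> 2, 0.
shrunk = svt(np.diag([3.0, 0.5]), tau=1.0)

# Applied to a rank-3 matrix, the output column space cannot exceed rank 3.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
X = svt(low_rank, tau=1.0)
```

The shrink-and-clip behavior is what drives small singular values (noise, clutter) to exactly zero while retaining the dominant subspace structure that the S-Low coding objective exploits.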
NASA Astrophysics Data System (ADS)
Fouquet, Thierry N. J.; Cody, Robert B.; Ozeki, Yuka; Kitagawa, Shinya; Ohtani, Hajime; Sato, Hiroaki
2018-05-01
The Kendrick mass defect (KMD) analysis of multiply charged polymeric distributions has recently revealed a surprising isotopic split in their KMD plots, namely a 1/z difference between the KMDs of isotopes of an oligomer at charge state z. Relying on the KMD analysis of actual and simulated distributions of poly(ethylene oxide) (PEO), the isotopic split is mathematically accounted for and found to be accompanied by an isotopic misalignment in certain cases. It is demonstrated that the divisibility (resp. indivisibility) of the nominal mass of the repeating unit (R) by z is the condition for homolog ions to line up horizontally (resp. to be misaligned obliquely) in a KMD plot. Computing KMDs using a fractional base unit R/z corrects the misalignments for the associated charge state, while using the least common multiple of all the charge states as the divisor realigns all the points at once. The isotopic split itself can be removed by using either a new charge-dependent KMD plot compatible with any fractional base unit or the remainders of KM (RKM) recently developed for low-resolution data, both found to be linked in a unified theory. These original applications of the fractional base units and the RKM plots are important theoretically, to satisfy the basics of a mass defect analysis, and practically, for correct data handling of single stage and tandem mass spectra of multiply charged homo- and copolymers.
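The KMD arithmetic with a fractional base unit can be sketched as follows (values for PEO's C2H4O repeat unit; the rounding convention shown is one of several in use, and the function is purely illustrative):

```python
def kendrick_mass_defect(m, unit_exact, unit_nominal, z=1):
    """Kendrick mass defect computed with a fractional base unit (unit/z),
    the device used to realign multiply charged homolog ions. For z = 1
    this reduces to the classical KMD."""
    km = m * (unit_nominal / z) / (unit_exact / z)  # Kendrick mass
    return round(km) - km                           # defect (one convention)

# PEO repeat unit C2H4O: exact mass 44.02621 u, nominal mass 44 u.
# Two homologs differing by one repeat unit must share the same KMD,
# which is what makes homolog series line up horizontally in a KMD plot.
kmd_a = kendrick_mass_defect(1000.0 + 44.02621, 44.02621, 44)
kmd_b = kendrick_mass_defect(1000.0 + 2 * 44.02621, 44.02621, 44)
```

Passing z > 1 divides the base unit, which is exactly the fractional-base-unit correction described above for a given charge state.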
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vignat, C.; Bercher, J.-F.
The family of Tsallis entropies was introduced by Tsallis in 1988. The Shannon entropy belongs to this family as the limit case q → 1. The canonical distributions in R^n that maximize this entropy under a covariance constraint are easily derived as Student-t (q<1) and Student-r (q>1) multivariate distributions. A nice geometrical result about these Student-r distributions is that they are marginals of uniform distributions on a sphere of larger dimension d, with the relationship d = n + 2 + 2/(q-1). As q → 1, we recover the famous Poincaré observation according to which a Gaussian vector can be viewed as the projection of a vector uniformly distributed on the infinite-dimensional sphere. A related property in the case q<1 is also available. Often associated with Renyi-Tsallis entropies is the notion of escort distributions. We provide here a geometric interpretation of these distributions. Another result concerns a universal system in physics, the harmonic oscillator: in the usual quantum context, the waveform of the n-th state of the harmonic oscillator is a Gaussian waveform multiplied by the degree-n Hermite polynomial. We show, starting from recent results by Carinena et al., that the quantum harmonic oscillator on spaces with constant curvature is described by maximal Tsallis entropy waveforms multiplied by the extended Hermite polynomials derived from this measure. This gives a neat interpretation of the non-extensive parameter q in terms of the curvature of the space the oscillator evolves on; as q → 1, the curvature of the space goes to 0 and we recover the classical harmonic oscillator in R^3.
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for satisfiability (SAT) constraint satisfaction problems and for unconstrained minimization of NK functions.
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, which solves pdf transport based on Euler fields, has been proposed; it eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps, where at each time step the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. One can observe that the proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and there is no need to compute the generalized Lagrange basis and matrices, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective and reliable and does not require any restrictive assumptions for the nonlinear terms.
NASA Technical Reports Server (NTRS)
Scharf, Daniel P.; Hadaegh, Fred Y.; Rahman, Zahidul H.; Shields, Joel F.; Singh, Gurkipal; Wette, Matthew R.
2004-01-01
The Terrestrial Planet Finder formation flying Interferometer (TPF-I) will be a five-spacecraft, precision formation operating near the second Sun-Earth Lagrange point. As part of technology development for TPF-I, a formation and attitude control system (FACS) is being developed that achieves the precision and functionality needed for the TPF-I formation and that will be demonstrated in a distributed, real-time simulation environment. In this paper we present an overview of FACS and discuss in detail its formation estimation, guidance and control architectures and algorithms. Since FACS is currently being integrated into a high-fidelity simulation environment, component simulations demonstrating algorithm performance are presented.
NASA Technical Reports Server (NTRS)
Scharf, Daniel P.; Hadaegh, Fred Y.; Rahman, Zahidul H.; Shields, Joel F.; Singh, Gurkipal
2004-01-01
The Terrestrial Planet Finder formation flying Interferometer (TPF-I) will be a five-spacecraft, precision formation operating near a Sun-Earth Lagrange point. As part of technology development for TPF-I, a formation and attitude control system (FACS) is being developed that achieves the precision and functionality associated with the TPF-I formation. This FACS will be demonstrated in a distributed, real-time simulation environment. In this paper we present an overview of the FACS and discuss in detail its constituent formation estimation, guidance and control architectures and algorithms. Since the FACS is currently being integrated into a high-fidelity simulation environment, component simulations demonstrating algorithm performance are presented.
25 CFR Appendix B to Subpart C - Population Adjustment Factor
Code of Federal Regulations, 2013 CFR
2013-04-01
... the amount available to the first population range. ** The number of tribes changes yearly. *** The..., multiply the Distribution Factor by the total number of tribes identified in the population range to... Population Range 1 . . . 5 = Population Ranges 1 through 5 Nn = Number of tribes in the nth Population Range...
25 CFR Appendix B to Subpart C - Population Adjustment Factor
Code of Federal Regulations, 2014 CFR
2014-04-01
... the amount available to the first population range. ** The number of tribes changes yearly. *** The..., multiply the Distribution Factor by the total number of tribes identified in the population range to... Population Range 1 . . . 5 = Population Ranges 1 through 5 Nn = Number of tribes in the nth Population Range...
25 CFR Appendix B to Subpart C - Population Adjustment Factor
Code of Federal Regulations, 2012 CFR
2012-04-01
... the amount available to the first population range. ** The number of tribes changes yearly. *** The..., multiply the Distribution Factor by the total number of tribes identified in the population range to... Population Range 1 . . . 5 = Population Ranges 1 through 5 Nn = Number of tribes in the nth Population Range...
25 CFR Appendix B to Subpart C - Population Adjustment Factor
Code of Federal Regulations, 2011 CFR
2011-04-01
... the amount available to the first population range. ** The number of tribes changes yearly. *** The..., multiply the Distribution Factor by the total number of tribes identified in the population range to... Population Range 1 . . . 5 = Population Ranges 1 through 5 Nn = Number of tribes in the nth Population Range...
NASA Astrophysics Data System (ADS)
Rahimi, Esmaeil; Oraee, Kazem; Shafahi, Zia Aldin; Ghasemzadeh, Hasan
2015-03-01
Determination of optimum cut-off grades over the life of a mine affects production planning and the ultimate pit limit, and it is also important from social, economic and environmental perspectives. Calculation of optimum cut-off grades has received little attention for mines employing several mineral processing methods. In this paper, an optimization technique is applied to obtain optimum cut-off grades for both concentration and heap leaching processes. In this technique, the production costs and the different recoveries of the heap leaching method are incorporated into the modeling of the annual cash flows of copper mines. Considering the governing constraints, the Lagrange multiplier method is applied to optimize the cut-off grades, where the objective function is to maximize Net Present Value. The results indicate the effect of the heap leaching process on the optimum cut-off grades of primary and secondary sulfide deposits.
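The cut-off grade trade-off can be illustrated with a crude sketch (hypothetical grade distribution and economics, not the paper's copper-mine model; the Lagrange-multiplier machinery enters only when capacity constraints bind, which is omitted here):

```python
import numpy as np

# Illustrative only: all blocks are mined, and material at or above the
# cut-off grade is processed. Sweeping candidate cut-offs shows a clear
# interior optimum near the breakeven grade proc_cost/(price*recovery).
rng = np.random.default_rng(1)
grades = rng.lognormal(mean=-1.0, sigma=0.5, size=10_000)  # hypothetical % Cu
price, proc_cost, mine_cost, recovery = 80.0, 12.0, 2.0, 0.85

def profit(cutoff):
    processed = grades[grades >= cutoff]
    revenue = (price * recovery * processed - proc_cost).sum()
    return revenue - mine_cost * len(grades)

cutoffs = np.linspace(0.0, 1.5, 151)
values = np.array([profit(c) for c in cutoffs])
best_cutoff = cutoffs[values.argmax()]
```

Raising the cut-off discards low-grade blocks whose processing margin is negative; in the full problem, capacity constraints shift this breakeven by an opportunity cost, which is precisely the Lagrange multiplier the paper optimizes.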
Modelling of charged satellite motion in Earth's gravitational and magnetic fields
NASA Astrophysics Data System (ADS)
Abd El-Bar, S. E.; Abd El-Salam, F. A.
2018-05-01
In this work, Lagrange's planetary equations for a charged satellite subjected to the Earth's gravitational and magnetic force fields are solved. The Earth's gravitational, magnetic and electric force components are obtained and expressed in terms of orbital elements. The variational equations of the orbit for the considered model in Keplerian elements are derived. The solution of the problem is obtained in a fully analytical way. The temporal rates of change of the orbital elements of the spacecraft are integrated via Lagrange's planetary equations and integrals of the normalized Keplerian motion obtained by Ahmed (Astron. J. 107(5):1900, 1994).
NASA Astrophysics Data System (ADS)
Petrie, Gordon; Pevtsov, Alexei; Schwarz, Andrew; DeRosa, Marc
2018-06-01
The solar photospheric magnetic flux distribution is key to structuring the global solar corona and heliosphere. Regular full-disk photospheric magnetogram data are therefore essential to our ability to model and forecast heliospheric phenomena such as space weather. However, our spatio-temporal coverage of the photospheric field is currently limited by our single vantage point at/near Earth. In particular, the polar fields play a leading role in structuring the large-scale corona and heliosphere, but each pole is unobservable for > 6 months per year. Here we model the possible effect of full-disk magnetogram data from the Lagrange points L4 and L5, each extending longitude coverage by 60°. Adding data also from the more distant point L3 extends the longitudinal coverage much further. The additional vantage points also improve the visibility of the globally influential polar fields. Using a flux-transport model for the solar photospheric field, we model full-disk observations from Earth/L1, L3, L4, and L5 over a solar cycle, construct synoptic maps using a novel weighting scheme adapted for merging magnetogram data from multiple viewpoints, and compute potential-field models for the global coronal field. Each additional viewpoint brings the maps and models into closer agreement with the reference field from the flux-transport simulation, with particular improvement at polar latitudes, the main source of the fast solar wind.
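The merging idea, weighting each viewpoint's contribution to a synoptic-map pixel, can be sketched as a toy weighted average (hypothetical weights; the paper's actual scheme is more elaborate):

```python
import numpy as np

# Two viewpoints observing two synoptic-map pixels. NaN marks a pixel a
# viewpoint cannot see; weights (e.g. center-to-limb factors) favor views
# that observe the pixel closer to their disk center.
values = np.array([[10.0, 12.0],
                   [11.0, np.nan]])          # views x pixels (hypothetical G)
mu = np.array([[0.9, 0.2],
               [0.5, 0.8]])                  # hypothetical per-view weights
w = np.where(np.isnan(values), 0.0, mu)      # unseen pixels get zero weight
merged = np.nansum(w * values, axis=0) / w.sum(axis=0)
```

Pixels seen by several viewpoints blend toward the better-observed measurement, while pixels seen from only one vantage fall back to that single value, which is how extra Lagrange-point viewpoints fill the polar and far-side gaps.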
Phylogenetics, biogeography, and staminal evolution in the tribe Mentheae (Lamiaceae).
Drew, Bryan T; Sytsma, Kenneth J
2012-05-01
The mint family (Lamiaceae) is the sixth largest family of flowering plants, with the tribe Mentheae containing about a third of the species. We present a detailed perspective on the evolution of the tribe Mentheae based on a phylogenetic analysis of cpDNA and nrDNA that is the most comprehensive to date, a biogeographic set of analyses using a fossil-calibrated chronogram, and an examination of staminal evolution. Data from four cpDNA and two nrDNA markers representing all extant genera within the tribe Mentheae were analyzed using the programs BEAST, Lagrange, S-DIVA, and BayesTraits. BEAST was used to simultaneously estimate phylogeny and divergence times, Lagrange and S-DIVA were used for biogeographical reconstruction, and BayesTraits was used to infer staminal evolution within the tribe. Currently accepted subtribal delimitations are shown to be invalid and are updated. The Mentheae and all five of its subtribes have a Mediterranean origin and have dispersed to the New World multiple times. The vast majority of New World species of subtribe Menthinae are the product of a single dispersal event in the mid-late Miocene. At least four transitions from four stamens to two stamens have occurred within Mentheae, once in the subtribe Salviinae, once in the subtribe Lycopinae, and at least twice in the subtribe Menthinae. Worldwide cooling trends probably played a large role in the diversification and present day distribution of the tribe Mentheae. Additional work is needed to ascertain relationships within some Mentheae genera, especially in the subtribe Menthinae.
Sharing out NASA's spoils. [economic benefits of U.S. space program
NASA Technical Reports Server (NTRS)
Bezdek, Roger H.; Wendling, Robert M.
1992-01-01
The economic benefits of NASA programs are discussed. Emphasis is given to an analysis of indirect economic benefits which estimates the effect of NASA programs on employment, personal income, corporate sales and profits, and government tax revenues in the U.S. and in each state. Data are presented that show that NASA programs have widely varying multipliers by industry and that illustrate the distribution of jobs by industry as well as the distribution of sales.
Investigating Trojan Asteroids at the L4/L5 Sun-Earth Lagrange Points
NASA Technical Reports Server (NTRS)
John, K. K.; Graham, L. D.; Abell, P. A.
2015-01-01
Investigations of Earth's Trojan asteroids will have benefits for science, exploration, and resource utilization. By sending a small spacecraft to the Sun-Earth L4 or L5 Lagrange points to investigate near-Earth objects, Earth's Trojan population can be better understood. This could lead to future missions for larger precursor spacecraft as well as human missions. The presence of objects in the Sun-Earth L4 and L5 Lagrange points has long been suspected, and in 2010 NASA's Wide-field Infrared Survey Explorer (WISE) detected a 300 m object. To investigate these Earth Trojan asteroid objects, it is both essential and feasible to send spacecraft to these regions. By exploring a wide field area, a small spacecraft equipped with an IR camera could hunt for Trojan asteroids and other Earth co-orbiting objects at the L4 or L5 Lagrange points in the near-term. By surveying the region, a zeroth-order approximation of the number of objects could be obtained with some rough constraints on their diameters, which may lead to the identification of potential candidates for further study. This would serve as a precursor for additional future robotic and human exploration targets. Depending on the inclination of these potential objects, they could be used as proving areas for future missions in the sense that the delta-V's to get to these targets are relatively low as compared to other rendezvous missions. They can serve as platforms for extended operations in deep space while interacting with a natural object in microgravity. Theoretically, such low inclination Earth Trojan asteroids exist. By sending a spacecraft to L4 or L5, these likely and potentially accessible targets could be identified.
7 CFR 1463.105 - Base quota levels for eligible quota holders.
Code of Federal Regulations, 2013 CFR
2013-01-01
... the BQL adjustment factor 1.071295. (Note: The factor adjusts the 2004 basic quota to the 2002 basic... farm. (Note: In the case of undivided tract ownership, BQL must be distributed among the tract quota... basic quota, multiplied by the BQL adjustment factor 1.23457. (Note: The factor adjusts the 2004 basic...
7 CFR 1463.105 - Base quota levels for eligible quota holders.
Code of Federal Regulations, 2014 CFR
2014-01-01
... the BQL adjustment factor 1.071295. (Note: The factor adjusts the 2004 basic quota to the 2002 basic... farm. (Note: In the case of undivided tract ownership, BQL must be distributed among the tract quota... basic quota, multiplied by the BQL adjustment factor 1.23457. (Note: The factor adjusts the 2004 basic...
7 CFR 993.58 - Deferment of time for withholding.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., and shall be in an amount computed by multiplying the pounds of natural condition prunes for which... shall be used by the committee to purchase from handlers a quantity of natural condition prunes, up to..., with reserve pool funds for distribution to equity holders. (3) If for any reason the committee is...
7 CFR 993.58 - Deferment of time for withholding.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., and shall be in an amount computed by multiplying the pounds of natural condition prunes for which... shall be used by the committee to purchase from handlers a quantity of natural condition prunes, up to..., with reserve pool funds for distribution to equity holders. (3) If for any reason the committee is...
Finding a larger newsletter audience. For New York Methodist, new approach is worthwhile.
Botvin, J D
2001-01-01
New York Methodist Hospital met the challenges of intense competition in the Brooklyn, N.Y., market by switching its quarterly newsletter to an insert in the local editions of national publications. By so doing, it multiplied distribution tenfold, improved its credibility and freed up staff time for other projects.
ERIC Educational Resources Information Center
Gsponer, Andre
2009-01-01
The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the…
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. Copyright © 2005 Elsevier Ltd. All rights reserved.
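A stripped-down sketch of the ROS idea for a single detection limit (real implementations, such as the R tools described above, handle multiple limits and other refinements):

```python
import numpy as np
from scipy import stats

def ros_impute(detects, n_censored):
    """Minimal robust ROS sketch for one detection limit: fit log(conc)
    vs. normal quantiles of plotting positions on the detected values,
    then impute the censored observations from the fitted line at the
    lowest ranks (which the censored values occupy)."""
    n = len(detects) + n_censored
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom positions
    q = stats.norm.ppf(pp)
    slope, intercept, *_ = stats.linregress(q[n_censored:],
                                            np.log(np.sort(detects)))
    imputed = np.exp(intercept + slope * q[:n_censored])
    return np.concatenate([imputed, np.sort(detects)])

# Five detected concentrations plus three censored reports ("< DL"):
data = ros_impute(np.array([0.8, 1.1, 1.6, 2.4, 3.9]), n_censored=3)
mean_est = data.mean()
```

Summary statistics computed on the completed sample avoid the bias of substituting zero or the detection limit for every censored value.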
Thomas, R.E.
1959-08-25
An electronic multiplier circuit is described in which an output voltage with an amplitude proportional to the product or quotient of the input signals is produced in a novel manner that facilitates simplicity of circuit construction and a high degree of accuracy in the multiplying and dividing functions. The circuit broadly comprises a multiplier tube in which the plate current is proportional to the voltage applied to a first control grid multiplied by the difference between the voltage applied to a second control grid and the voltage applied to the first control grid. Means are provided to apply a first signal to be multiplied to the first control grid, together with means for applying the sum of the first and second signals to be multiplied to the second control grid, whereby the plate current of the multiplier tube is proportional to the product of the first and second signals to be multiplied.
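The multiplying action follows from simple algebra: if the plate current obeys i_p = k * v_g1 * (v_g2 - v_g1) and grid 1 receives v1 while grid 2 receives the sum v1 + v2, then i_p = k * v1 * v2. A one-line check of the identity:

```python
def plate_current(v_g1, v_g2, k=1.0):
    """Idealized multiplier-tube law from the abstract:
    i_p = k * v_g1 * (v_g2 - v_g1)."""
    return k * v_g1 * (v_g2 - v_g1)

# Feeding v1 to grid 1 and (v1 + v2) to grid 2 cancels the v_g1^2 term,
# leaving a current proportional to the product v1 * v2.
v1, v2 = 3.0, 4.0
i_p = plate_current(v1, v1 + v2)   # equals v1 * v2 = 12.0 for k = 1
```

Division follows by the same identity run in reverse, solving for one input given the current and the other input.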
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let Sn[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. We obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp spaces. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(−x²/2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding weighted error inequalities to hold, for k = 0, 1, 2, ..., r.
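A minimal sketch of the interpolation operator Ln[f] for the Hermite weight W(x) = e^(−x²/2): the nodes are the zeros of the degree-n "probabilists'" Hermite polynomial, available through NumPy's Gauss-Hermite-E quadrature routine. The basis is evaluated in plain product form for clarity, not efficiency.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def lagrange_at_hermite_nodes(f, n):
    """Return a callable evaluating the Lagrange interpolation
    polynomial L_n[f] whose nodes are the zeros of He_n, the degree-n
    orthogonal polynomial for the weight W(x) = exp(-x^2/2)."""
    nodes, _ = He.hermegauss(n)          # zeros of He_n
    def L(x):
        x = np.asarray(x, float)
        out = np.zeros_like(x)
        for j, xj in enumerate(nodes):
            # j-th Lagrange basis polynomial in product form
            lj = np.ones_like(x)
            for k, xk in enumerate(nodes):
                if k != j:
                    lj *= (x - xk) / (xj - xk)
            out += f(xj) * lj
        return out
    return L
```

Since L_n reproduces polynomials of degree below n exactly, a cubic interpolated on five nodes is recovered to machine precision, which makes a convenient sanity check.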
Using multi-dimensional Smolyak interpolation to make a sum-of-products potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
2015-07-28
We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).
Prediction of a Densely Loaded Particle-Laden Jet using a Euler-Lagrange Dense Spray Model
NASA Astrophysics Data System (ADS)
Pakseresht, Pedram; Apte, Sourabh V.
2017-11-01
Modeling of a dense spray regime using an Euler-Lagrange discrete-element approach is challenging because of locally high volume loading. A subgrid cluster of droplets can lead to locally high void fractions for the disperse phase. Under these conditions, spatio-temporal changes in the carrier-phase volume fraction, which are commonly neglected in spray simulations with an Euler-Lagrange two-way coupling model, could become important. Accounting for the carrier-phase volume fraction variations leads to zero-Mach-number, variable-density governing equations. With pressure-based solvers, this gives rise to a source term in the pressure Poisson equation and a non-divergence-free velocity field. To test the validity and predictive capability of such an approach, a round jet laden with solid particles is investigated using Direct Numerical Simulation and compared with available experimental data for different loadings. Various volume fractions spanning the dilute to dense regimes are investigated with and without taking into account the volume displacement effects. The predictions of the two approaches are compared and analyzed to investigate the effectiveness of the dense spray model. Financial support was provided by the National Aeronautics and Space Administration (NASA).
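In symbols, accounting for a spatially varying fluid volume fraction replaces the usual divergence-free condition by a variable-density constraint; a standard form of this statement (notation ours, not necessarily the authors') is:

```latex
% Carrier-phase continuity with fluid volume fraction \theta_f:
\frac{\partial \theta_f}{\partial t}
  + \nabla \cdot \left( \theta_f \, \mathbf{u}_f \right) = 0
\quad \Longrightarrow \quad
\nabla \cdot \mathbf{u}_f
  = -\frac{1}{\theta_f} \frac{D \theta_f}{D t} \neq 0 .
```

The nonzero right-hand side is what appears as an extra source term in the pressure Poisson equation, so the resolved carrier-phase velocity field is no longer divergence free wherever particles displace fluid.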
Phase-plane analysis to an “anisotropic” higher-order traffic flow model
NASA Astrophysics Data System (ADS)
Wu, Chun-Xiu
2018-04-01
The qualitative theory of differential equations is applied to investigate the traveling wave solution to an “anisotropic” higher-order viscous traffic flow model under the Lagrange coordinate system. The types and stabilities of the equilibrium points are discussed in the phase plane. Through numerical simulation, the overall distribution structures of trajectories are drawn to analyze the relation between the phase diagram and the selected conservative solution variables, and the influences of the parameters on the system are studied. The limit-circle, limit circle-spiral point, saddle-spiral point and saddle-nodal point solutions are obtained. These steady-state solutions provide a good explanation for the phenomena of oscillatory and homogeneous congestion in real-world traffic.
Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study
NASA Astrophysics Data System (ADS)
Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2013-04-01
The European Union Water Framework Directive (EU-WFD) called its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, first, we identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For the parameter uncertainty assessment, given the high number of parameters of the SWAT model, we first screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique.
Subsequently, we only considered the most sensitive parameters for parameter optimization and UA. To explicitly account for the stream flow uncertainty, we assumed that the stream flow measurement error increases linearly with the stream flow value. To assess the uncertainty and infer posterior distributions of the parameters, we used a Markov Chain Monte Carlo (MCMC) sampler, differential evolution adaptive metropolis (DREAM), which generates candidate points in each individual chain by sampling from an archive of past states. It is shown that the marginal posterior distributions of the rainfall multipliers vary widely between individual events, as a consequence of rainfall measurement errors and the spatial variability of the rain. Only a few of the rainfall events are well defined. The marginal posterior distributions of the SWAT model parameter values are well defined and identified by DREAM, within their prior ranges. The posterior distributions of the output uncertainty parameter values also show that the stream flow data are highly uncertain. The approach of using rainfall multipliers to treat rainfall uncertainty for a complex model has an impact on the model parameter marginal posterior distributions and on the model results.
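The two error treatments described above can be sketched in a few lines: storm-specific rainfall multipliers corrupt the input, and the stream flow error standard deviation grows linearly with the observed flow. Everything here is a hedged stand-in: `model` is an arbitrary placeholder for a rainfall-runoff simulator, not SWAT, and the argument names are ours.

```python
import numpy as np

def log_likelihood(multipliers, rain_periods, observed_flow, model,
                   rel_error=0.1):
    """Gaussian log-likelihood with (a) one latent multiplier per
    independent rainfall period and (b) a measurement error standard
    deviation proportional to the observed stream flow."""
    # Corrupt each independent rainfall period with its own multiplier
    corrected = [m * r for m, r in zip(multipliers, rain_periods)]
    simulated = model(np.concatenate(corrected))
    sigma = rel_error * observed_flow          # error linear in flow
    resid = observed_flow - simulated
    return -0.5 * np.sum((resid / sigma) ** 2
                         + np.log(2.0 * np.pi * sigma ** 2))
```

An MCMC sampler such as DREAM would then explore the multipliers jointly with the model parameters; the function above is the piece it evaluates at each candidate point.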
Estimating random errors due to shot noise in backscatter lidar observations.
Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang
2006-06-20
We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
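The NSF definition above reduces to a two-line computation: from a set of repeated measurements of a nominally constant signal, the NSF is the ratio of the sample RMS noise to the square root of the mean, and the random error of any single sample then follows from its signal level alone. A minimal sketch (function names are ours, not from the paper):

```python
import numpy as np

def noise_scale_factor(samples):
    """Estimate the noise scale factor (NSF): the constant of
    proportionality between the RMS random noise and the square root
    of the mean signal, from repeated measurements of a nominally
    constant signal."""
    samples = np.asarray(samples, float)
    return samples.std(ddof=1) / np.sqrt(samples.mean())

def shot_noise_error(signal, nsf):
    """Random-error estimate for a single measurement, given the NSF."""
    return nsf * np.sqrt(signal)
```

For ideal photon counting the variance equals the mean (Poisson statistics), so the NSF is 1; detector multiplication in analog-mode APDs and PMTs inflates the NSF above 1 while preserving the proportionality.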
Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations
NASA Technical Reports Server (NTRS)
Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang
2006-01-01
In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
A Highly Linear and Wide Input Range Four-Quadrant CMOS Analog Multiplier Using Active Feedback
NASA Astrophysics Data System (ADS)
Huang, Zhangcai; Jiang, Minglu; Inoue, Yasuaki
Analog multipliers are among the most important building blocks in analog signal processing circuits. High linearity and a wide input range are usually required of four-quadrant analog multipliers in most applications. Therefore, a highly linear, wide-input-range four-quadrant CMOS analog multiplier using active feedback is proposed in this paper. First, a novel configuration of the four-quadrant multiplier cell is presented. Its input dynamic range and linearity are improved significantly, compared with the conventional structure, by adding two resistors. Then, based on the proposed multiplier cell configuration, a four-quadrant CMOS analog multiplier with the active feedback technique is implemented with two operational amplifiers. Owing to both the proposed multiplier cell and the active feedback technique, the proposed multiplier achieves a much wider input range with higher linearity than conventional structures. The multiplier was fabricated in a 0.6 µm CMOS process. Experimental results show that the input range of the proposed multiplier can be up to 5.6 Vpp with 0.159% linearity error on VX and 4.8 Vpp with 0.51% linearity error on VY for ±2.5 V power supply voltages, respectively.
The rates and time-delay distribution of multiply imaged supernovae behind lensing clusters
NASA Astrophysics Data System (ADS)
Li, Xue; Hjorth, Jens; Richard, Johan
2012-11-01
Time delays of gravitationally lensed sources can be used to constrain the mass model of a deflector and determine cosmological parameters. We here present an analysis of the time-delay distribution of multiply imaged sources behind 17 strong lensing galaxy clusters with well-calibrated mass models. We find that for time delays less than 1000 days, at z = 3.0, their logarithmic probability distribution functions are well represented by P(log Δt) = 5.3 × 10⁻⁴ Δt^β̃ / M₂₅₀^(2β̃), with β̃ = 0.77, where M₂₅₀ is the projected cluster mass inside 250 kpc (in units of 10¹⁴ M⊙) and β̃ is the power-law slope of the distribution. The resultant probability distribution function enables us to estimate the time-delay distribution in a lensing cluster of known mass. For a cluster with M₂₅₀ = 2 × 10¹⁴ M⊙, the fraction of time delays less than 1000 days is approximately 3%. Taking Abell 1689 as an example, its dark halo and brightest galaxies, with central velocity dispersions σ ≥ 500 km s⁻¹, mainly produce large time delays, while galaxy-scale mass clumps are responsible for generating smaller time delays. We estimate the probability of observing multiple images of a supernova in the known images of Abell 1689. A two-component model for estimating the supernova rate is applied in this work. For a magnitude threshold of mAB = 26.5, the yearly rate of Type Ia (core-collapse) supernovae with time delays less than 1000 days is 0.004 ± 0.002 (0.029 ± 0.001). If the magnitude threshold is lowered to mAB ~ 27.0, the rate of core-collapse supernovae suitable for time-delay observation is 0.044 ± 0.015 per year.
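Because the fitted distribution is a pure power law in Δt, its cumulative fraction below a cutoff has a simple closed form when integrated over log₁₀ Δt. The sketch below uses the coefficients quoted in the abstract; the closed form of the integral is our own step, and it lands in the same few-percent range as the quoted ~3% for M₂₅₀ = 2 × 10¹⁴ M⊙.

```python
import numpy as np

def fraction_below(dt_max_days, m250, beta=0.77, a=5.3e-4):
    """Fraction of time delays below dt_max for the fitted law
    P(log dt) = a * dt**beta / m250**(2*beta), integrating over
    x = log10(dt) from -infinity up to log10(dt_max):
        integral of a * 10**(beta*x) dx
          = a * 10**(beta*X) / (beta * ln 10)."""
    X = np.log10(dt_max_days)
    return a / m250 ** (2.0 * beta) * 10.0 ** (beta * X) / (beta * np.log(10.0))
```

Larger cluster masses suppress the short-delay fraction (the M₂₅₀^(2β̃) term), while raising the delay cutoff increases it monotonically.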
The Multiscale Robin Coupled Method for flows in porous media
NASA Astrophysics Data System (ADS)
Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.
2018-02-01
A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established and its connection with the method of Douglas et al. (1993) [12] is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from the MMMFEM to the MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters, such as the HMM, that are usually estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of the HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM is plugged into the EA periodically, after the execution of the EA for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using variable segmentation to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
Stoy, Paul C; Quaife, Tristan
2015-01-01
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from those created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
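The 2DTR machinery can be sketched as a small linear-algebra problem: recover a fine grid whose block averages match the coarse field, with a Laplacian smoothness penalty weighted by the multiplier γ. This is a generic Tikhonov downscaling sketch under our own operator choices (block averaging, 5-point Laplacian with a zero-padded boundary), not the authors' exact formulation.

```python
import numpy as np

def tikhonov_2d(b_coarse, factor, gamma):
    """Infer a fine (nc*factor x nc*factor) field x whose block
    averages match the coarse field b, by solving the regularized
    normal equations (A^T A + gamma * L^T L) x = A^T b directly.
    Dense matrices keep the sketch readable; sizes must stay tiny."""
    nc = b_coarse.shape[0]
    nf = nc * factor
    N = nf * nf
    # A: block-averaging operator from the fine grid to the coarse grid
    A = np.zeros((nc * nc, N))
    for i in range(nc):
        for j in range(nc):
            for di in range(factor):
                for dj in range(factor):
                    col = (i * factor + di) * nf + (j * factor + dj)
                    A[i * nc + j, col] = 1.0 / factor ** 2
    # L: 5-point Laplacian on the fine grid (zero-padded boundary)
    L = np.zeros((N, N))
    for i in range(nf):
        for j in range(nf):
            k = i * nf + j
            L[k, k] = -4.0
            for ii, jj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ii < nf and 0 <= jj < nf:
                    L[k, ii * nf + jj] = 1.0
    lhs = A.T @ A + gamma * L.T @ L
    rhs = A.T @ b_coarse.ravel()
    return np.linalg.solve(lhs, rhs).reshape(nf, nf)
```

Small γ reproduces the coarse averages almost exactly with a smooth fill-in; larger γ trades fidelity to the coarse data for additional smoothness, which is the knob the abstract relates to the semivariogram range.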
Non Abelian T-duality in Gauged Linear Sigma Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando
Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSMs as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; they depend in general on semi-chiral superfields; for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches the one of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their susy vacua.
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
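The minimax objective described above is conveniently solved in epigraph form: minimize an auxiliary variable t subject to MSE_i ≤ t for every user and per-user power bounds, which is exactly the structure the KKT conditions apply to. The sketch below uses a stand-in single-user MSE expression, not the MC-CDMA expression from the paper, and a general-purpose SQP solver rather than the authors' method.

```python
import numpy as np
from scipy.optimize import minimize

def minimax_power(gains, p_max, noise=1.0):
    """Toy epigraph formulation of minimax-MSE power allocation:
        minimize t  subject to  mse_i(p) <= t,  0 <= p_i <= p_max,
    with the illustrative per-user MSE mse_i = noise / (noise + g_i p_i).
    Decision vector is [p_1, ..., p_n, t]."""
    n = len(gains)
    g = np.asarray(gains, float)
    mse = lambda p: noise / (noise + g * p)
    x0 = np.concatenate([np.full(n, p_max / 2.0), [1.0]])
    cons = [{'type': 'ineq', 'fun': lambda x: x[n] - mse(x[:n])}]
    bounds = [(0.0, p_max)] * n + [(0.0, None)]
    res = minimize(lambda x: x[n], x0, bounds=bounds,
                   constraints=cons, method='SLSQP')
    return res.x[:n], res.x[n]
```

With two users of unequal channel gains, the weaker user is driven to full power while the stronger user needs only enough power to meet the common MSE level, mirroring the abstract's observation that under the minimax criterion some users do not transmit at full power.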
Stoy, Paul C.; Quaife, Tristan
2015-01-01
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from those created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes. PMID:26067835
A posteriori model validation for the temporal order of directed functional connectivity maps.
Beltz, Adriene M; Molenaar, Peter C M
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aims of the current study were (a) to apply an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors, combined with decision criteria for revising the maps based upon Lagrange multiplier tests) to directed functional connectivity maps of functional magnetic resonance imaging data, and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group and individual levels while incorporating external input; then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: with a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
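A white noise test of one-step-ahead prediction errors checks whether any autocorrelation remains in the residuals; the Ljung-Box portmanteau statistic is one common choice for this, implemented below from scratch (the paper's exact test statistic may differ).

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, max_lag=10):
    """Ljung-Box test for white-noise residuals. Returns (Q, p_value);
    small p-values flag unmodeled sequential dependence, i.e., the
    residuals are not white and the connectivity map should be revised."""
    r = np.asarray(resid, float)
    n = len(r)
    r = r - r.mean()
    denom = np.sum(r ** 2)
    lags = np.arange(1, max_lag + 1)
    # Sample autocorrelations at lags 1..max_lag
    acf = np.array([np.sum(r[k:] * r[:-k]) / denom for k in lags])
    Q = n * (n + 2) * np.sum(acf ** 2 / (n - lags))
    return Q, chi2.sf(Q, df=max_lag)
```

Applied after fitting, a rejection at some chosen level would trigger the map-revision step (e.g., adding higher order lagged connections) until the residuals pass.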
NASA Technical Reports Server (NTRS)
Gerhard, Craig Steven; Gurdal, Zafer; Kapania, Rakesh K.
1996-01-01
Layerwise finite element analyses of geodesically stiffened cylindrical shells are presented. The layerwise laminate theory of Reddy (LWTR) is developed and adapted to circular cylindrical shells. The Ritz variational method is used to develop an analytical approach for studying the buckling of simply supported geodesically stiffened shells with discrete stiffeners. This method utilizes a Lagrange multiplier technique to attach the stiffeners to the shell. The development of the layerwise shells couples a one-dimensional finite element through the thickness with a Navier solution that satisfies the boundary conditions. The buckling results from the Ritz discrete analytical method are compared with smeared buckling results and with NASA Testbed finite element results. The development of layerwise shell and beam finite elements is presented and these elements are used to perform the displacement field, stress, and first-ply failure analyses. The layerwise shell elements are used to model the shell skin and the layerwise beam elements are used to model the stiffeners. This arrangement allows the beam stiffeners to be assembled directly into the global stiffness matrix. A series of analytical studies are made to compare the response of geodesically stiffened shells as a function of loading, shell geometry, shell radii, shell laminate thickness, stiffener height, and geometric nonlinearity. Comparisons of the structural response of geodesically stiffened shells, axial and ring stiffened shells, and unstiffened shells are provided. In addition, interlaminar stress results near the stiffener intersection are presented. First-ply failure analyses for geodesically stiffened shells utilizing the Tsai-Wu failure criterion are presented for a few selected cases.
Transient response for interaction of two dynamic bodies
NASA Technical Reports Server (NTRS)
Prabhakar, A.; Palermo, L. G.
1987-01-01
During the launch sequence of any space vehicle complicated boundary interactions occur between the vehicle and the launch stand. At the start of the sequence large forces exist between the two; contact is then broken in a short but finite time which depends on the release mechanism. The resulting vehicle response produces loads which are very high and often form the design case. It is known that the treatment of the launch pad as a second dynamic body is significant for an accurate prediction of launch response. A technique was developed for obtaining loads generated by the launch transient with the effect of pad dynamics included. The method solves uncoupled vehicle and pad equations of motion. The use of uncoupled models allows the simulation of vehicle launch in a single computer run. Modal formulation allows a closed-form solution to be written, eliminating any need for a numerical integration algorithm. When the vehicle is on the pad the uncoupled pad and vehicle equations have to be modified to account for the constraints they impose on each other. This necessitates the use of an iterative procedure to converge to a solution, using Lagrange multipliers to apply the required constraints. As the vehicle lifts off the pad the coupling between the vehicle and the pad is eliminated point by point until the vehicle flies free. Results obtained by this method were shown to be in good agreement with observed loads and other analysis methods. The resulting computer program is general, and was used without modification to solve a variety of contact problems.
NASA Astrophysics Data System (ADS)
Tóth, Balázs
2018-03-01
Some new dual and mixed variational formulations based on a priori nonsymmetric stresses will be developed for linearly coupled irreversible thermoelastodynamic problems associated with second sound effect according to the Lord-Shulman theory. Having introduced the entropy flux vector instead of the entropy field and defining the dissipation and the relaxation potential as the function of the entropy flux, a seven-field dual and mixed variational formulation will be derived from the complementary Biot-Hamilton-type variational principle, using the Lagrange multiplier method. The momentum-, the displacement- and the infinitesimal rotation vector, and the a priori nonsymmetric stress tensor, the temperature change, the entropy field and its flux vector are considered as the independent field variables of this formulation. In order to handle appropriately the six different groups of temporal prescriptions in the relaxed- and/or the strong form, two variational integrals will be incorporated into the seven-field functional. Then, eliminating the entropy from this formulation through the strong fulfillment of the constitutive relation for the temperature change with the use of the Legendre transformation between the enthalpy and Gibbs potential, a six-field dual and mixed action functional is obtained. As a further development, the elimination of the momentum- and the velocity vector from the six-field principle through the a priori satisfaction of the kinematic equation and the constitutive relation for the momentum vector leads to a five-field variational formulation. These principles are suitable for the transient analyses of the structures exposed to a thermal shock of short temporal domain or a large heat flux.
A methodology for constraining power in finite element modeling of radiofrequency ablation.
Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng
2017-07-01
Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but rather the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant-power RFA with a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
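The constant-power constraint can be sketched on a scalar analogue (not the paper's FEM): with a temperature-dependent conductance standing in for the tissue, Newton-Raphson adjusts the applied voltage so that the delivered power, rather than the voltage or current, matches the prescribed value. The conductance model and coefficients below are assumptions for illustration.

```python
def conductance(T, g0=0.5, alpha=0.015):
    # Illustrative temperature-dependent electric conductance (assumed model)
    return g0 * (1.0 + alpha * T)

def voltage_for_power(p_target, T, v=1.0, tol=1e-12, max_iter=50):
    """Newton-Raphson on v so that the delivered power g(T)*v^2 equals
    p_target: the constraint is on power, not on voltage (Dirichlet)
    or current (Neumann)."""
    g = conductance(T)
    for _ in range(max_iter):
        residual = g * v * v - p_target   # power-constraint residual
        if abs(residual) < tol:
            break
        v -= residual / (2.0 * g * v)     # Newton step, d(residual)/dv = 2gv
    return v
```

In the full problem the same Newton-Raphson idea is applied to the coupled Laplace and Pennes equations with the power constraint as an extra algebraic equation.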
Non Abelian T-duality in Gauged Linear Sigma Models
Bizet, Nana Cabo; Martínez-Merino, Aldo; Zayas, Leopoldo A. Pando; ...
2018-04-01
Abelian T-duality in Gauged Linear Sigma Models (GLSM) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSMs as a gauging of a global U(1) symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; they depend in general on semi-chiral superfields, while for cases such as SU(2) they depend on twisted chiral superfields. We solve the equations of motion for an SU(2) gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the U(1) field strength in a fixed configuration on the original theory matches that of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their susy vacua.
Uses and abuses of multipliers in the stand prognosis model
David A. Hamilton
1994-01-01
Users of the Stand Prognosis Model may have difficulties in selecting the proper set of multipliers to simulate a desired effect or in determining the appropriate value to assign to selected multipliers. A series of examples describe impact of multipliers on simulated stand development. Guidelines for the proper use of multipliers are presented....
Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers
NASA Astrophysics Data System (ADS)
Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille
This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers, and of the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
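For reference, the single-size Montgomery reduction (REDC) that such hardware implements can be sketched in a few lines. The setup step is exactly the modulus-dependent precomputation the abstract refers to: the constant n' with n·n' ≡ −1 (mod R). This is a textbook sketch, not the paper's double-size construction.

```python
def mont_setup(n, r_bits):
    # Modulus-dependent precomputation: n' with n * n' == -1 (mod R)
    r = 1 << r_bits
    return (-pow(n, -1, r)) % r      # pow(n, -1, r): Python 3.8+ modular inverse

def redc(t, n, r_bits, n_prime):
    """Montgomery reduction: returns t * R^-1 mod n for 0 <= t < n*R."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask   # make t + m*n divisible by R
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

def mont_mul(a_bar, b_bar, n, r_bits, n_prime):
    # Product of Montgomery residues a*R and b*R reduces to (a*b)*R mod n
    return redc(a_bar * b_bar, n, r_bits, n_prime)
```

Because every new modulus requires a fresh n' (and the R² mod n constant for converting into Montgomery form), avoiding repeated precomputation is what makes the paper's double-size scheme attractive.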
NASA Astrophysics Data System (ADS)
Rais, Muhammad H.
2010-06-01
This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as finite impulse response (FIR) filters and the discrete cosine transform (DCT). A remarkable reduction in FPGA resources, delay, and power can be achieved by using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement compared to standard multipliers. Results show that the anomalies in average connection delay and maximum pin delay observed on the Spartan-3AN device are efficiently reduced on the Virtex-4 device.
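A behavioral sketch (not the VHDL design) of why truncation saves hardware: a truncated array multiplier simply omits the partial-product cells in the least significant columns, and the resulting error is bounded by the number of partial products times the weight of the truncation column.

```python
def standard_mul(a, b, n_bits):
    # Full shift-and-add array multiplier: all partial products kept
    return sum((b << i) for i in range(n_bits) if (a >> i) & 1)

def truncated_mul(a, b, n_bits, t):
    """Model of a truncated multiplier: discard partial-product bits in
    the t least significant columns (the omitted adder cells)."""
    total = 0
    for i in range(n_bits):
        if (a >> i) & 1:
            total += ((b << i) >> t) << t   # zero out columns below t
    return total
```

Each of the at most n_bits partial products loses less than 2^t, so the truncation error is below n_bits · 2^t; in hardware a small correction constant is typically added to recenter this error.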
Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George
2017-08-15
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
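The LPNN idea can be shown on a toy convex problem (not the TDOA cost from the brief): variable neurons follow gradient descent on the Lagrangian while the multiplier neuron follows gradient ascent, and the coupled dynamics settle at the constrained optimum. Problem, step size, and iteration count are illustrative assumptions.

```python
def lpnn_solve(steps=6000, dt=0.01):
    """Lagrange programming neural network dynamics for
        minimize x^2 + y^2   subject to   x + y = 1,
    with L(x, y, lam) = x^2 + y^2 + lam * (x + y - 1).
    Variable neurons descend dL/dx, dL/dy; the multiplier neuron
    ascends dL/dlam (forward-Euler discretization of the ODEs)."""
    x = y = lam = 0.0
    for _ in range(steps):
        dx = -(2.0 * x + lam)
        dy = -(2.0 * y + lam)
        dlam = x + y - 1.0
        x, y, lam = x + dt * dx, y + dt * dy, lam + dt * dlam
    return x, y, lam
```

For this convex problem the equilibrium (x, y, lam) = (0.5, 0.5, −1) is locally stable, mirroring the local stability analysis the brief carries out for the TDOA network.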
Plasmonic Roche lobe in metal-dielectric-metal structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiu, Ruei-Cheng; Lan, Yung-Chiang
2013-07-15
This study investigates a plasmonic Roche lobe that is based on a metal-dielectric-metal (MDM) structure using finite-difference time-domain simulations and theoretical analyses. The effective refractive index of the MDM structure has two centers and is inversely proportional to the distance from the position of interest to the centers, in a manner that is analogous to the gravitational potential in a two-star system. The motion of surface plasmons (SPs) strongly depends on the ratio of permittivities at the two centers. The Lagrange point is an unstable equilibrium point for SPs that propagate in the system. After the SPs have passed through the Lagrange point, their spread drastically increases.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
A comparison of VLSI architecture of finite field multipliers using dual, normal or standard basis
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
Three different finite field multipliers are presented: (1) a dual basis multiplier due to Berlekamp; (2) a Massey-Omura normal basis multiplier; and (3) the Scott-Tavares-Peppard standard basis multiplier. These algorithms are chosen because each has its own distinct features which apply most suitably in different areas. Finally, they are implemented on silicon chips with nitride metal oxide semiconductor technology so that the multiplier most desirable for very large scale integration implementations can readily be ascertained.
Entropy, recycling and macroeconomics of water resources
NASA Astrophysics Data System (ADS)
Karakatsanis, Georgios; Mamassis, Nikos; Koutsoyiannis, Demetris
2014-05-01
We propose a macroeconomic model for water quantity and quality supply multipliers derived by water recycling (Karakatsanis et al. 2013). Macroeconomic models that incorporate natural resource conservation have become increasingly important (European Commission et al. 2012). In addition, as an estimated 80% of globally used freshwater is not reused (United Nations 2012), under increasing population trends, water recycling becomes a solution of high priority. Recycling of water resources creates two major conservation effects: (1) conservation of water in reservoirs and aquifers and (2) conservation of ecosystem carrying capacity due to wastewater flux reduction. Statistical distribution properties of the recycling efficiencies, on both water quantity and quality, for each sector are of vital economic importance. Uncertainty and complexity of water reuse in sectors are statistically quantified by entropy. High entropy of recycling efficiency values signifies greater efficiency dispersion, which in turn may indicate the need for additional infrastructure to both shift and concentrate the statistical distribution towards higher efficiencies that lead to higher supply multipliers. Keywords: Entropy, water recycling, water supply multipliers, conservation, recycling efficiencies, macroeconomics References 1. European Commission (EC), Food and Agriculture Organization (FAO), International Monetary Fund (IMF), Organization of Economic Cooperation and Development (OECD), United Nations (UN) and World Bank (2012), System of Environmental and Economic Accounting (SEEA) Central Framework (White cover publication), United Nations Statistics Division 2. Karakatsanis, G., N. Mamassis, D. Koutsoyiannis and A.
Efstratiades (2013), Entropy and reliability of water use via a statistical approach of scarcity, 5th EGU Leonardo Conference - Hydrofractals 2013 - STAHY '13, Kos Island, Greece, European Geosciences Union, International Association of Hydrological Sciences, International Union of Geodesy and Geophysics 3. United Nations (UN) (2012), World Water Development Report 4, UNESCO Publishing
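Entropy as a dispersion measure for recycling-efficiency distributions can be sketched directly; the two histograms below are hypothetical sector data, not figures from the study.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical histograms of recycling efficiency across firms in a sector:
dispersed    = [0.25, 0.25, 0.25, 0.25]   # efficiencies spread out evenly
concentrated = [0.05, 0.05, 0.10, 0.80]   # most mass near one efficiency
```

The dispersed histogram attains the maximum entropy log 4, while the concentrated one scores lower; in the model's terms, high entropy flags the sectors where infrastructure is needed to concentrate efficiencies at the high end.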
Pipeline active filter utilizing a booth type multiplier
NASA Technical Reports Server (NTRS)
Nathan, Robert (Inventor)
1987-01-01
Multiplier units of the modified Booth decoder and carry-save adder/full adder combination are used to implement a pipeline active filter wherein pixel data is processed sequentially, and each pixel need only be accessed once and multiplied by a predetermined number of weights simultaneously, one multiplier unit for each weight. Each multiplier unit uses only one row of carry-save adders, and the results are shifted to less significant multiplier positions and one row of full adders to add the carry to the sum in order to provide the correct binary number for the product Wp. The full adder is also used to add this product Wp to the sum of products ΣWp from preceding multiply units. If m×m multiplier units are pipelined, the system would be capable of processing a kernel array of m×m weighting factors.
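The modified (radix-4) Booth decoding at the heart of each multiplier unit can be sketched behaviorally: overlapping bit triplets of the multiplier are recoded into digits in {−2, −1, 0, 1, 2}, halving the number of partial products. This is a textbook model of the recoding, not the patent's carry-save hardware.

```python
# Radix-4 modified Booth recoding table: triplet -> signed digit
BOOTH_DIGIT = {0: 0, 1: 1, 2: 1, 3: 2, 4: -2, 5: -1, 6: -1, 7: 0}

def booth_digits(a, n_bits):
    """Recoded digits of a non-negative n_bits multiplier: overlapping
    bit triplets (with an appended 0 below the LSB) map to {-2..2}."""
    a_ext = a << 1                       # append the extra zero bit
    return [BOOTH_DIGIT[(a_ext >> i) & 0b111] for i in range(0, n_bits, 2)]

def booth_mul(a, b, n_bits=8):
    # One (possibly negated or doubled) partial product per digit pair
    return sum(d * (b << (2 * i)) for i, d in enumerate(booth_digits(a, n_bits)))
```

Each digit contributes a single shifted partial product, which is why one row of carry-save adders per unit suffices in the pipelined filter.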
Neutron skyshine measurements at Fermilab.
Cossairt, J D; Coulson, L V
1985-02-01
Neutron skyshine has been a significant source of environmental radiation exposure at many high-energy proton accelerators. A particularly troublesome source of skyshine neutrons has existed at Fermilab during operation of the 400-GeV high-energy physics program. This paper reports on several measurements of this source made with a DePangher precision long counter at large distances. The spatial distribution of the neutron skyshine can approximately be described as an inverse square law dependence multiplied by an exponential with an approximate attenuation length of 1200 ± 300 m. The absolute magnitude of the distributions can be matched directly to the conventionally measured absorbed dose distribution near the source.
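The reported empirical form, an inverse-square dependence multiplied by an exponential attenuation, is easy to evaluate; the normalization constant below is an illustrative assumption, only the functional shape comes from the abstract.

```python
import math

def skyshine_rate(r, source_term, attenuation_length=1200.0):
    """Empirical skyshine distribution quoted above: inverse-square law
    times an exponential with ~1200 m attenuation length.
    source_term is an illustrative normalization, not a measured value."""
    return source_term * math.exp(-r / attenuation_length) / r**2
```

For example, going from 300 m to 600 m reduces the rate by a factor of 4 from geometry alone, times an additional exp(300/1200) from attenuation.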
NASA Astrophysics Data System (ADS)
Subanti, S.; Hakim, A. R.; Hakim, I. M.
2018-03-01
The purpose of the current study is to analyze multipliers for the mining sector in Indonesia. The mining sector comprises coal and metal; crude oil, natural gas, and geothermal; and other mining and quarrying. The multiplier analysis is based on input-output analysis and is divided into an income multiplier and an output multiplier. The results show that (1) the Indonesian mining sector ranks 6th, contributing 6.81% of national total output; (2) based on total gross value added, this sector contributes 12.13%, ranking 4th; (3) the income multiplier is 0.7062 and the output multiplier is 1.2426.
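In input-output analysis, output multipliers are the column sums of the Leontief inverse (I − A)⁻¹. A toy two-sector sketch (the technical coefficients below are made up, not Indonesian data):

```python
def leontief_inverse_2x2(A):
    """(I - A)^-1 for a 2x2 technical-coefficient matrix A."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical 2-sector technical coefficients (share of each input
# per unit of output), purely for illustration:
A = [[0.20, 0.30],
     [0.10, 0.25]]
L = leontief_inverse_2x2(A)
# Output multiplier of sector j = column sum of the Leontief inverse
output_multipliers = [L[0][j] + L[1][j] for j in range(2)]
```

Each multiplier exceeds 1 because a unit of final demand in a sector induces that unit of output plus the indirect output required from all supplying sectors.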
Dynamic analysis and control PID path of a model type gantry crane
NASA Astrophysics Data System (ADS)
Ospina-Henao, P. A.; López-Suspes, Framsol
2017-06-01
This paper presents an alternate form for the dynamic modelling of a mechanical system that simulates a real-life gantry crane, using Euler's classical mechanics and the Lagrange formalism, which allows the equations of motion that describe our model to be found. Moreover, a basic design of the system was made using the SolidWorks software; based on the material and dimensions of the model, it provides some physical variables necessary for modelling. In order to verify the theoretical results obtained, a comparison was made between the solutions obtained by simulation in SimMechanics-Matlab and the Euler-Lagrange system of equations, which was solved through Matlab libraries for systems of equations of the type and order obtained. The force is determined, but not as exerted by the spring, as this will be the control variable. The objective is to bring the mass of the pendulum from one point to another over a specified distance without oscillation, so that the response is overdamped. This article includes an analysis of PID control in which the Euler-Lagrange equations of motion are rewritten in state space; once there, they were implemented in Simulink to obtain the natural response of the system to a step input in F and then draw the desired trajectories.
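A discrete PID loop of the kind described can be sketched on a unit-mass double integrator standing in for the trolley; the plant, gains, and step sizes below are illustrative assumptions, not the paper's crane model, with gains chosen so the step response settles without sustained oscillation.

```python
def pid_step_response(kp=20.0, ki=5.0, kd=10.0, setpoint=1.0,
                      dt=0.001, steps=20000):
    """Discrete PID driving a unit-mass double integrator x'' = u.
    Returns the position after steps*dt seconds of simulation."""
    x = v = integral = 0.0
    prev_err = setpoint - x
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        prev_err = err
        v += u * dt          # semi-implicit Euler integration of the plant
        x += v * dt
    return x
```

The closed-loop characteristic polynomial s³ + kd·s² + kp·s + ki is stable for these gains (Routh condition kd·kp > ki), so the position converges to the setpoint.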
Lacroix, Rémy; Da Silva, Serge; Gaig, Monica Viaplana; Rousseau, Raphael; Délia, Marie-Line; Bergel, Alain
2014-11-07
The theoretical bases for modelling the distribution of the electrostatic potential in microbial electrochemical systems are described. The secondary potential distribution (i.e. without mass transport limitation of the substrate) is shown to be sufficient to validly address microbial electrolysis cells (MECs). MECs are modelled with two different ionic conductivities of the solution (1 and 5.3 S m⁻¹) and two bioanode kinetics (jmax = 5.8 or 34 A m⁻²). A conventional reactor configuration, with the anode and the cathode face to face, is compared with a configuration where the bioanode, perpendicular to the cathode, implements the electrochemical reaction on its two sides. The low solution conductivity is shown to have a crucial impact, which cancels out the advantages obtained by setting the bioanode perpendicular to the cathode. For the same reason, when the surface area of the anode is increased by multiplying the number of plates, care must be taken not to create too dense an anode architecture. Actually, the advantages of increasing the surface area by multiplying the number of plates can be lost through worsening of the electrochemical conditions in the multi-layered anode, because of the increase of the electrostatic potential of the solution inside the anode structure. The model gives the first theoretical bases for scaling up MECs in a rather simple but rigorous way.
NASA Astrophysics Data System (ADS)
Charles, Alexandre; Ballard, Patrick
2016-08-01
The dynamics of mechanical systems with a finite number of degrees of freedom (discrete mechanical systems) is governed by the Lagrange equation, which is a second-order differential equation on a Riemannian manifold (the configuration manifold). The handling of perfect (frictionless) unilateral constraints in this framework (that of Lagrange's analytical dynamics) was undertaken by Schatzman and Moreau at the beginning of the 1980s. A mathematically sound and consistent evolution problem was obtained, paving the road for many subsequent theoretical investigations. In this general evolution problem, the only reaction force which is involved is a generalized reaction force, consistently with the virtual power philosophy of Lagrange. Surprisingly, such a general formulation was never derived in the case of frictional unilateral multibody dynamics. Instead, the paradigm of the Coulomb law applying to reaction forces in the real world is generally invoked. So far, this paradigm has enabled a consistent evolution problem to be obtained in only a very few specific examples, and has suggested numerical algorithms to produce computational examples (numerical modeling). In particular, it is not clear what evolution problem underlies the computational examples. Moreover, some of the few specific cases in which this paradigm makes it possible to write down a precise evolution problem are known to show paradoxes: the Painlevé paradox (indeterminacy) and the Kane paradox (increase in kinetic energy due to friction). In this paper, we follow Lagrange's philosophy and formulate the frictional unilateral multibody dynamics in terms of the generalized reaction force and not in terms of the real-world reaction force. A general evolution problem that governs the dynamics is obtained for the first time. We prove that all the solutions are dissipative; that is, this new formulation is free of the Kane paradox.
We also prove that some indeterminacy of the Painlevé paradox is fixed in this formulation.
USDA-ARS?s Scientific Manuscript database
Potato has about 100 wild species relatives that are multiplied in the form of botanical seed populations by genebanks, and distributed for use in research and breeding, so factors that affect long term seed germination are of interest. In 1987 the US Potato Genebank conducted routine seed multiplic...
Tables of compound-discount interest rate multipliers for evaluating forestry investments.
Allen L. Lundgren
1971-01-01
Tables, prepared by computer, are presented for 10 selected compound-discount interest rate multipliers commonly used in financial analyses of forestry investments. Two sets of tables are given for each of the 10 multipliers. The first set gives multipliers for each year from 1 to 40 years; the second set gives multipliers at 5-year intervals from 5 to 160 years....
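Three of the standard compound-discount multipliers that such tables cover can be computed directly for any rate i and term n (the abstract does not list which 10 multipliers are tabulated; these are representative textbook formulas):

```python
def compound_amount(i, n):
    """Future value of 1 after n years at annual rate i: (1+i)^n."""
    return (1.0 + i) ** n

def present_value(i, n):
    """Value today of 1 received in n years: 1/(1+i)^n."""
    return 1.0 / (1.0 + i) ** n

def present_value_annual_series(i, n):
    """Present value of 1 received at the end of each year for n years."""
    return (1.0 - (1.0 + i) ** -n) / i
```

For example, at i = 5% over 10 years the compound amount factor is about 1.6289, and the discount factor is its reciprocal.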
Madden, M.; Batey, P. W. J.
1983-05-01
Some problems associated with demographic-economic forecasting include finding models appropriate for a declining economy with unemployment, using a multiregional approach in an interregional model, finding a way to show differential consumption while endogenizing unemployment, and avoiding unemployment inconsistencies. The solution to these problems involves the construction of an activity-commodity framework, locating it within a group of forecasting models, and indicating possible ratios towards dynamization of the framework. The authors demonstrate the range of impact multipliers that can be derived from the framework and show how these multipliers relate to Leontief input-output multipliers. It is shown that desired population distribution may be obtained by selecting instruments from the economic sphere to produce, through the constraints vector of an activity-commodity framework, targets selected from demographic activities. The next step in this process, empirical exploitation, was carried out by the authors in the United Kingdom, linking an input-output model with a wide selection of demographic and demographic-economic variables. The generally tenuous control which government has over any variables in systems of this type, especially in market economies, makes application in the policy field of the optimization approach a partly conjectural exercise, although the analytic capacity of the approach can provide clear indications of policy directions.
Isometric deformations of unstretchable material surfaces, a spatial variational treatment
NASA Astrophysics Data System (ADS)
Chen, Yi-Chao; Fosdick, Roger; Fried, Eliot
2018-07-01
The stored energy of an unstretchable material surface is assumed to depend only upon the curvature tensor. By control of its edge(s), the surface is deformed isometrically from its planar undistorted reference configuration into an equilibrium shape. That shape is to be determined from a suitably constrained variational problem as a state of relative minimal potential energy. We pose the variational problem as one of relative minimum potential energy in a spatial form, wherein the deformation of a flat, undistorted region D in E2 to its distorted form S in E3 is assumed specified. We then apply the principle that the first variation of the potential energy, expressed as a functional over S ∪ ∂S , must vanish for all admissible variations that correspond to isometric deformations from the distorted configuration S and that also contain the essence of flatness that characterizes the reference configuration D , but is not covered by the single statement that the variation of S correspond to an isometric deformation. We emphasize the commonly overlooked condition that the spatial expression of the variational problem requires an additional variational constraint of zero Gaussian curvature to ensure that variations from S that are isometric deformations also contain the notion of flatness. In this context, it is particularly revealing to observe that the two constraints produce distinct, but essential and complementary, conditions on the first variation of S. The resulting first variation integral condition, together with the constraints, may be applied, for example, to the case of a flat, undistorted, rectangular strip D that is deformed isometrically into a closed ring S by connecting its short edges and specifying that its long edges are free of loading and, therefore, subject to zero traction and couple traction. 
The elementary example of a closed ring without twist as a state of relative minimum potential energy is discussed in detail, and the bending of the strip by opposing specific bending moments on its short edges is treated as a particular case. Finally, the constrained variational problem, with the introduction of appropriate constraint reactions as Lagrangian multipliers to account for the requirements that the deformation from D to S is isometric and that D is flat, is formulated in the spatial form, and the associated Euler-Lagrange equations are derived. We then solve the Euler-Lagrange equations for two representative problems in which a planar undistorted rectangular material strip is isometrically deformed by applied edge tractions and couple tractions (i.e., specific edge moments) into (i) a bent and twisted circular cylindrical helical state, and (ii) a state conformal with the surface of a right circular conical form.
2018-01-01
The genus Liolaemus comprises more than 260 species and can be divided into two subgenera: Eulaemus and Liolaemus sensu stricto. In this paper, we present a phylogenetic analysis, divergence times, and ancestral distribution ranges of the Liolaemus alticolor-bibronii group (Liolaemus sensu stricto subgenus). We inferred a total evidence phylogeny combining molecular (Cytb and 12S genes) and morphological characters using Maximum Parsimony and Bayesian Inference. Divergence times were calculated using Bayesian MCMC with an uncorrelated lognormal distributed relaxed clock, calibrated with a fossil record. Ancestral ranges were estimated using Dispersal-Extinction-Cladogenesis (DEC-Lagrange). Effects of some a priori parameters of DEC were also tested. Distributions ranged from central Perú to southern Argentina, including areas at sea level up to the high Andes. The L. alticolor-bibronii group was recovered as monophyletic, formed by two clades: L. walkeri and L. gracilis, the latter of which can be split into two groups. Additionally, many candidate species were recognized. We estimate that the L. alticolor-bibronii group diversified 14.5 Myr ago, during the Middle Miocene. Our results suggest that the ancestor of the Liolaemus alticolor-bibronii group was distributed in a wide area including Patagonia and the Puna highlands. The speciation pattern follows the South-North Diversification Hypothesis, consistent with the Andean uplift. PMID:29479502
An Introduction to Lagrangian Differential Calculus.
ERIC Educational Resources Information Center
Schremmer, Francesca; Schremmer, Alain
1990-01-01
Illustrates how Lagrange's approach applies to the differential calculus of polynomial functions when approximations are obtained. Discusses how to obtain polynomial approximations in other cases. (YP)
Spherical Pendulum Small Oscillations for Slewing Crane Motion
Perig, Alexander V.; Stadnik, Alexander N.; Deriglazov, Alexander I.
2014-01-01
The present paper focuses on the Lagrange mechanics-based description of small oscillations of a spherical pendulum with a uniformly rotating suspension center. The analytical solution of the natural frequencies' problem has been derived for the case of uniform rotation of a crane boom. The payload paths have been found in the inertial reference frame fixed on earth and in the noninertial reference frame, which is connected with the rotating crane boom. The numerical amplitude-frequency characteristics of the relative payload motion have been found. The mechanical interpretation of the terms in Lagrange equations has been outlined. The analytical expression and numerical estimation for cable tension force have been proposed. The numerical computational results, which correlate very accurately with the experimental observations, have been shown. PMID:24526891
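Two ingredients of the analysis above are easy to sketch: the small-oscillation natural frequency of the payload cable, and the rotation that maps payload coordinates between the frame attached to the slewing crane boom and the earth-fixed inertial frame. Both formulas are standard; the functions are illustrative, not the paper's full solution.

```python
import math

def pendulum_frequency(g, length):
    """Small-oscillation natural frequency sqrt(g/L) of the payload
    cable, before the rotating-suspension effects are added."""
    return math.sqrt(g / length)

def rotating_to_inertial(x_r, y_r, omega, t):
    """Map payload coordinates from the frame rotating with the crane
    boom at angular rate omega to the earth-fixed inertial frame."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return c * x_r - s * y_r, s * x_r + c * y_r
```

The payload paths reported in the paper are precisely the same trajectory expressed once in each of these two frames.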
Variational tricomplex of a local gauge system, Lagrange structure and weak Poisson bracket
NASA Astrophysics Data System (ADS)
Sharapov, A. A.
2015-09-01
We introduce the concept of a variational tricomplex, which is applicable both to variational and nonvariational gauge systems. Assigning this tricomplex with an appropriate symplectic structure and a Cauchy foliation, we establish a general correspondence between the Lagrangian and Hamiltonian pictures of one and the same (not necessarily variational) dynamics. In practical terms, this correspondence allows one to construct the generating functional of a weak Poisson structure starting from that of a Lagrange structure. As a byproduct, a covariant procedure is proposed for deriving the classical BRST charge of the BFV formalism by a given BV master action. The general approach is illustrated by the examples of Maxwell’s electrodynamics and chiral bosons in two dimensions.
Automobile Industry Retail Price Equivalent and Indirect Cost Multipliers
This report develops a modified multiplier, referred to as an indirect cost (IC) multiplier, which specifically evaluates the components of indirect costs that are likely to be affected by vehicle modifications associated with environmental regulation. A range of IC multipliers a...
Aerial cooperative transporting and assembling control using multiple quadrotor-manipulator systems
NASA Astrophysics Data System (ADS)
Qi, Yuhua; Wang, Jianan; Shan, Jiayuan
2018-02-01
In this paper, a fully distributed control scheme for aerial cooperative transporting and assembling is proposed using multiple quadrotor-manipulator systems, with each quadrotor equipped with a robotic manipulator. First, the kinematic and dynamic models of a quadrotor with a multi-Degree of Freedom (DOF) robotic manipulator are established together using Euler-Lagrange equations. Based on the aggregated dynamic model, the control scheme consisting of a position controller, an attitude controller and a manipulator controller is presented. Regarding cooperative transporting and assembling, multiple quadrotor-manipulator systems should be able to form a desired formation without collision among quadrotors from any initial position. The desired formation is achieved by the distributed position controller and attitude controller, while collision avoidance is guaranteed by an artificial potential function method. Then, the transporting and assembling tasks require the manipulators to reach the desired angles cooperatively, which is achieved by the distributed manipulator controller. The overall stability of the closed-loop system is proven by a Lyapunov method and Matrosov's theorem. In the end, the proposed control scheme is simplified for real application and then validated by two formation flying missions of four quadrotors with 2-DOF manipulators.
Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Ju, Wenyun; Sun, Kai
In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
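A single-type toy version of the estimation idea (the paper uses the multi-type EM estimator with the Lagrange-Good inversion): simulate subcritical cascades with Poisson offspring and recover the offspring mean as total offspring divided by total individuals. The Poisson sampler is included because Python's random module does not provide one; all parameters are illustrative.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplication method; adequate for small lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def estimate_offspring_mean(lam, n_cascades=3000, seed=1):
    """Harris-type estimator over simulated single-type cascades:
    total offspring / total individuals (a toy stand-in for the
    paper's multi-type EM estimation)."""
    rng = random.Random(seed)
    offspring_total = individuals = 0
    for _ in range(n_cascades):
        generation = 1                      # each cascade starts with 1 outage
        while generation:
            children = sum(poisson_sample(lam, rng) for _ in range(generation))
            individuals += generation
            offspring_total += children
            generation = children
    return offspring_total / individuals
```

An estimated mean below 1 indicates a subcritical process; in the multi-type setting the analogous criticality indicator is the largest eigenvalue of the offspring mean matrix.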
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., the cumulative probability) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside the summation for simplification, the terms remaining within the summation, and the summation itself, risk overflowing for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this factor is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K.
CUMPOIS was developed in 1988.
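The rescaling trick the abstract describes can be mimicked compactly in log space; this is a modern sketch of the same overflow/underflow avoidance, not the original C code:

```python
import math

def cum_pois(n, lam):
    """P(X <= n) for X ~ Poisson(lam).  Each term lam**i * exp(-lam) / i!
    is formed in log space, and the dominant term is factored out before
    summing -- the same role as CUMPOIS's exponential rescaling of the
    partial sums: no intermediate value can overflow or underflow."""
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(n + 1)]
    m = max(log_terms)  # factor out the dominant term
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)
```

A direct summation of `lam**i / math.factorial(i)` would overflow for lambda and n in the thousands, while this version stays well-scaled (e.g., `cum_pois(1000, 1000.0)` evaluates to roughly 0.5, as expected near the mean).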
Planar diode multiplier chains for THz spectroscopy
NASA Technical Reports Server (NTRS)
Maiwald, Frank W.; Drouin, Brian J.; Pearson, John C.; Mehdi, Imran; Lewena, Frank; Endres, Christian; Winnewisser, Gisbert
2005-01-01
High-resolution laboratory spectroscopy is utilized as a diagnostic tool to determine the noise and harmonic content of balanced [9]-[11] and unbalanced [12]-[14] multiplier designs. Balanced multiplier designs suppress unintended harmonics by more than 20 dB; much weaker suppression was measured on unbalanced multipliers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yijian; Hong, Mingyi; Dall'Anese, Emiliano
This paper considers power distribution systems featuring renewable energy sources (RESs), and develops a distributed optimization method to steer the RES output powers to solutions of AC optimal power flow (OPF) problems. The design of the proposed method leverages suitable linear approximations of the AC power-flow equations, and is based on the Alternating Direction Method of Multipliers (ADMM). Convergence of the RES-inverter output powers to solutions of the OPF problem is established under suitable conditions on the stepsize as well as on the mismatches between the commanded setpoints and actual RES output powers. In a broad sense, the methods and results proposed here are also applicable to other distributed optimization setups with ADMM and inexact dual updates.
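The ADMM update pattern the method builds on can be sketched with a toy consensus problem; this shows only the x-update / z-update / dual-update structure, not the paper's linearized OPF formulation or its inexact dual updates:

```python
import numpy as np

def admm_consensus(a, rho=1.0, iters=100):
    """Toy ADMM consensus: each agent i minimizes (x_i - a_i)^2 subject
    to x_i = z.  The optimum is z = mean(a); the three-step iteration
    below is the standard scaled-dual form of ADMM."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    z = 0.0
    u = np.zeros_like(a)  # scaled dual variables
    for _ in range(iters):
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)  # local x-updates (closed form)
        z = np.mean(x + u)                           # coordinating z-update
        u = u + x - z                                # dual ascent step (exact here;
                                                     # the paper tolerates inexact updates)
    return z
```

In the paper's setting the local x-updates would instead be carried out by the RES inverters against the linearized power-flow model, with measured output powers standing in for the exact iterates.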
Coordinated Dispersal and Pre-Isthmian Assembly of the Central American Ichthyofauna
Tagliacollo, Victor A.; Duke-Sylvester, Scott M.; Matamoros, Wilfredo A.; Chakrabarty, Prosanta
2017-01-01
Abstract We document patterns of coordinated dispersal over evolutionary time frames in heroine cichlids and poeciliine live-bearers, the two most species-rich clades of freshwater fishes in the Caribbean basin. Observed dispersal rate (DO) values were estimated from time-calibrated molecular phylogenies in Lagrange+, a modified version of the ML-based parametric biogeographic program Lagrange. DO is measured in units of “wallaces” (wa) as the number of biogeographic range-expansion events per million years. DO estimates were generated on a dynamic paleogeographic landscape of five areas over three time intervals from Upper Cretaceous to Recent. Expected dispersal rate (DE) values were generated from alternative paleogeographic models, with dispersal rates proportional to target area and source-river discharge volume, and inversely proportional to paleogeographic distance. Correlations between DO and DE were used to assess the relative contributions of these three biogeographic parameters. DO estimates imply a persistent dispersal corridor across the Eastern (Antillean) margin of the Caribbean plate, under the influence of prevailing and perennial riverine discharge vectors such as the Proto–Orinoco–Amazon river. Ancestral area estimation places the earliest colonizations of the Greater Antilles and Central America during the Paleocene–Eocene (ca. 58–45 Ma), potentially during the existence of an incomplete Paleogene Arc (∼59 Ma) or Lesser Antilles Arc (∼45 Ma), but predating the GAARlandia land bridge (∼34–33 Ma). Paleogeographic distance is the single best predictor of DO. The Western (Central American) plate margin did not serve as a dispersal corridor until the Late Neogene (12–0 Ma), and contributed relatively little to the formation of modern distributions. PMID:26370565
DISCOS- DYNAMIC INTERACTION SIMULATION OF CONTROLS AND STRUCTURES (DEC VAX VERSION)
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
The Dynamic Interaction Simulation of Controls and Structures (DISCOS) program was developed for the dynamic simulation and stability analysis of passive and actively controlled spacecraft. In the use of DISCOS, the physical system undergoing analysis may be generally described as a cluster of contiguous flexible structures (bodies) that comprise a mechanical system, such as a spacecraft. The entire system (spacecraft) or portions thereof may be either spinning or nonspinning. Member bodies of the system may undergo large relative excursions, such as those of appendage deployment or rotor/stator motion. The general system of bodies is, by its inherent nature, a feedback system in which inertial forces (such as those due to centrifugal and Coriolis acceleration) and the restoring and damping forces are motion-dependent. The system may possess a control system in which certain position and rate errors are actively controlled through the use of reaction control jets, servomotors, or momentum wheels. Bodies of the system may be interconnected by linear or nonlinear springs and dampers, by a gimbal and slider-block mechanism, or by any combination of these. The DISCOS program can be used to obtain the nonlinear and linearized time response of the system, interaction constraint forces in the system, total system resonance properties, and frequency-domain response and stability information for the system. DISCOS is probably the most powerful computational tool to date for the computer simulation of actively controlled, coupled, multi-flexible-body systems. The program is not easy to understand and apply effectively, but then it is not intended for simple problems. The DISCOS user is expected to have extensive working knowledge of rigid-body and flexible-body dynamics, finite-element techniques, numerical methods, and frequency-domain analysis.
Various applications of DISCOS include simulation of the Shuttle payload deployment/retrieval mechanism, solar panel array deployment, antenna deployment, analysis of multispin satellites, and analysis of large, highly flexible satellites, including the design of attitude-control systems. The overall approach of DISCOS is unique in that any member body of the system may be flexible, and the system is not restricted to a topological tree configuration. The equations of motion are developed using the most general form of Lagrange's equations, including auxiliary nonholonomic and rheonomic conditions of constraint. Lagrange multipliers are used as interaction forces/torques to maintain prescribed constraints. Nonlinear flexible/rigid dynamic coupling effects are accounted for in unabridged fashion for individual bodies and for the total system. Elastic deformation can be represented by normal vibration modes or by any adequate series of Rayleigh functions, including 'quasi-static' displacement functions. To 'solve' Lagrange's equations of motion, the explicit form of the kinetic and potential energy functions, the dissipation function, and the form of the transformation relating ordinary Cartesian position coordinates to the generalized coordinates must be defined. The potential energy and dissipation functions for a structure are determined with standard finite-element techniques by the NASTRAN program. In order to use the computed functions, Lagrange's equations and the system kinematic constraint equations are expressed in matrix format. These differential matrix equations are solved numerically by the DISCOS program. Provisions are included for environmental loading of the structure (spacecraft), including solar pressure, gravity gradient, and aerodynamic drag. Input to DISCOS includes topological and geometrical descriptions of the structure under analysis, initial conditions, control system descriptions, and NASTRAN-derived structural matrices.
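The use of Lagrange multipliers as constraint forces can be illustrated with a toy example far simpler than DISCOS: a planar pendulum written in Cartesian coordinates, where the multiplier enforcing the rod-length constraint is solved for at each step. The integrator and parameters are assumptions for illustration only:

```python
import numpy as np

def pendulum_step(q, v, dt, g=9.81):
    """One step of a unit-mass, unit-length planar pendulum in Cartesian
    coordinates, with the rod enforced by a Lagrange multiplier -- a toy
    analogue of multipliers acting as constraint forces.
    Constraint phi(q) = q.q - 1 = 0 has Jacobian J = 2q; requiring
    phi'' = J.a + 2 v.v = 0 with a = f + lam*J determines lam."""
    f = np.array([0.0, -g])                   # applied (gravity) force
    J = 2.0 * q
    lam = -(J @ f + 2.0 * (v @ v)) / (J @ J)  # multiplier from the constraint
    a = f + lam * J                           # lam*J is the constraint force
    v = v + dt * a
    q = q + dt * v                            # semi-implicit Euler step
    return q, v
```

Because the constraint is enforced only at the acceleration level, the rod length drifts slowly under integration error; production multibody codes such as DISCOS pair the multiplier formulation with careful numerical treatment of the constraint equations.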
Specialized routines are supplied that read the input data and redimension the DISCOS program to minimize core requirements. Output includes an extensive list of calculated parameters for each body of the structure, the system state vector and its time derivatives, Euler angles and position coordinates and their time derivatives, control system variables and their time derivatives, and various system parameters at a given simulation time. For linearized system analysis, output includes the various transfer matrices, eigenvectors, and calculated eigenvalues. The DISCOS program is available by license for a period of ten (10) years to approved licensees. The licensed program product delivered includes the source code and supporting documentation. Additional documentation may be purchased separately at any time. The IBM version of DISCOS is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer under OS with a central memory requirement of approximately 1,100K of 8-bit bytes. The DEC VAX version of DISCOS is written in FORTRAN for batch execution and has been implemented on a DEC VAX series computer under VMS. For plotted output an SC4020 plotting system is required. DISCOS was developed on the IBM in 1978 and was adapted (with enhancements) to the DEC VAX in 1982.