NASA Astrophysics Data System (ADS)
Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David
2000-04-01
A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (which runs on a PC under Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook' that falls below the constrained ROC.
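The 'well-behaved' guarantee can be seen in a quick Monte Carlo sketch (a toy stand-in for the paper's maximum-likelihood fit, with assumed unit-variance normal distributions and an illustrative d'): because each positive case's rating is the maximum of the target's suspicion and the most suspicious non-target finding, positives stochastically dominate negatives, so the empirical ROC never dips below the chance line.

```python
import random

def simulate_roc(d_prime=1.5, n=20000, seed=1):
    """Monte Carlo ROC under a 'maximum-suspicion' model: each case's
    rating is the most suspicious finding on the image."""
    rng = random.Random(seed)
    # Negative cases: only non-target findings; suspicion ~ N(0, 1).
    neg = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # Positive cases: max of the target's suspicion N(d', 1) and the
    # most suspicious non-target finding N(0, 1).
    pos = [max(rng.gauss(d_prime, 1.0), rng.gauss(0.0, 1.0)) for _ in range(n)]
    thresholds = sorted(neg + pos, reverse=True)[:: n // 50]
    curve = []
    for t in thresholds:
        fpr = sum(x > t for x in neg) / n
        tpr = sum(x > t for x in pos) / n
        curve.append((fpr, tpr))
    return curve

curve = simulate_roc()
# Stochastic dominance of positives keeps the empirical ROC at or above
# the chance line (small margin for sampling noise): no 'hook'.
assert all(tpr >= fpr - 0.01 for fpr, tpr in curve)
```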
Matter coupling in partially constrained vielbein formulation of massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Antonio De; Mukohyama, Shinji; Gümrükçüoğlu, A. Emir
2016-01-01
We consider a linear effective vielbein matter coupling that does not introduce the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghosts at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even though the background evolution accords with the metric formulation, the perturbations display significantly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.
Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.
Zhao, Liang; Wu, Wei; Corso, Jason J
2013-01-01
Quantifying volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In response, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical-image analog of optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge Dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.
Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Kara, Imdat; Derya, Tusan
2011-09-01
The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well-known Traveling Salesman Problem (TSP). DCVRP arises in distribution and logistics problems. Constructing new formulations for it is the main motivation and contribution of this paper. We focus on two-indexed integer programming formulations for DCVRP: one node-based and one arc (flow)-based formulation are presented. Both formulations have O(n²) binary variables and O(n²) constraints, i.e., the number of decision variables and constraints grows polynomially with the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (Water's formulation in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations by using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of linear programming relaxation.
On the Miller-Tucker-Zemlin Based Formulations for the Distance Constrained Vehicle Routing Problems
NASA Astrophysics Data System (ADS)
Kara, Imdat
2010-11-01
The Vehicle Routing Problem (VRP) is an extension of the well-known Traveling Salesman Problem (TSP) and has many practical applications in distribution and logistics. When the VRP includes distance-based constraints it is called the Distance Constrained Vehicle Routing Problem (DVRP). The literature addressing the DVRP, however, is scarce. In this paper, existing two-indexed integer programming formulations with Miller-Tucker-Zemlin based subtour elimination constraints are reviewed. The existing formulations are simplified, and the resulting formulation is presented as formulation F1. It is shown that the distance-bounding constraints of F1 may not generate the distance traveled up to the related node. To remedy this, we redefine the auxiliary variables of the formulation and propose a second formulation, F2, with new and easy-to-use distance-bounding constraints. Adaptation of the second formulation to cases with additional restrictions, such as a minimal distance traveled by each vehicle, or other objectives, such as minimizing the longest distance traveled, is discussed.
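The role of the auxiliary variables can be illustrated with a small sketch (hypothetical distances and bound, not the paper's F1/F2 constraint sets): each visited node gets a label equal to the distance traveled from the depot up to that node, which is exactly the quantity the paper argues the original bounding constraints may fail to generate.

```python
# Hypothetical arc lengths; node 0 is the depot.
dist = {(0, 1): 4.0, (1, 2): 3.0, (2, 3): 5.0, (3, 0): 6.0}

def distance_labels(route, dist):
    """Assign each visited node the distance traveled from the depot
    up to that node -- the role of the MTZ-style auxiliary variables u_i."""
    u = {route[0]: 0.0}
    total = 0.0
    for i, j in zip(route, route[1:]):
        total += dist[(i, j)]   # linking constraint u_j = u_i + d_ij on used arcs
        u[j] = total
    return u

u = distance_labels([0, 1, 2, 3], dist)
assert u == {0: 0.0, 1: 4.0, 2: 7.0, 3: 12.0}
# Distance-bounding constraint: every label stays within the route limit D.
D = 15.0
assert all(v <= D for v in u.values())
```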
Constrained evolution in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with those of unconstrained evolution, in which the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability, in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
Constrained Deep Weak Supervision for Histopathology Image Segmentation.
Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan
2017-11-01
In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This work is set in a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively exploit additional weakly supervised information that is easy to obtain, providing a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system achieves state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
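As a minimal scalar analogue (not the paper's method, which bounds the full inertia matrix with LMIs and solves a semidefinite program): for a single axis with tau = I*alpha and a simple lower bound on I, the constrained least squares solution is the unconstrained estimate projected onto the feasible set. All numbers below are illustrative.

```python
def inertia_ls(alphas, torques, i_min):
    """Least-squares estimate of a single-axis inertia from tau = I * alpha,
    with a physical lower bound I >= i_min (a scalar stand-in for the
    paper's LMI bounds on the full inertia matrix)."""
    i_hat = sum(a * t for a, t in zip(alphas, torques)) / sum(a * a for a in alphas)
    # For a one-dimensional convex quadratic with a single bound, the
    # constrained optimum is the projection of the unconstrained optimum.
    return max(i_hat, i_min)

alphas = [1.0, 2.0, -1.5]        # angular accelerations (rad/s^2)
torques = [2.1, 3.9, -3.0]       # noisy torques, roughly I = 2
i = inertia_ls(alphas, torques, i_min=0.1)
assert 1.5 < i < 2.5
```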
NASA Astrophysics Data System (ADS)
Londrillo, P.; del Zanna, L.
2004-03-01
We present a general framework to design Godunov-type schemes for multidimensional ideal magnetohydrodynamic (MHD) systems, having the divergence-free relation and the related properties of the magnetic field B as built-in conditions. Our approach mostly relies on the constrained transport (CT) discretization technique for the magnetic field components, originally developed for the linear induction equation, which assures [∇·B]_num = 0 and its preservation in time to within machine accuracy in a finite-volume setting. We show that the CT formalism, when fully exploited, can be used as a general guideline to design the reconstruction procedures of the B vector field, to adapt standard upwind procedures for the momentum and energy equations, avoiding the onset of numerical monopoles of O(1) size, and to formulate approximate Riemann solvers for the induction equation. This general framework will be named here upwind constrained transport (UCT). To demonstrate the versatility of our method, we apply it to a variety of schemes, which are finally validated numerically and compared: a novel implementation for the MHD case of the second-order Roe-type positive scheme by Liu and Lax [J. Comput. Fluid Dyn. 5 (1996) 133], and both the second- and third-order versions of a central-type MHD scheme presented by Londrillo and Del Zanna [Astrophys. J. 530 (2000) 508], where the basic UCT strategies have been first outlined.
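The core CT property is easy to demonstrate on a toy 2D staggered grid (a minimal sketch, not the UCT scheme itself): with B stored on cell faces and the EMF E_z on cell corners, the discrete divergence of every cell is preserved to round-off for arbitrary corner EMF values, because each corner EMF enters the divergence with cancelling signs.

```python
import random

def ct_step(Bx, By, Ez, dt, dx, dy):
    """One constrained-transport update: face-centred B advanced with
    corner EMFs, so the discrete divergence is preserved exactly."""
    nx, ny = len(By), len(Bx[0])
    for i in range(nx + 1):
        for j in range(ny):
            Bx[i][j] -= dt * (Ez[i][j + 1] - Ez[i][j]) / dy   # dBx/dt = -dEz/dy
    for i in range(nx):
        for j in range(ny + 1):
            By[i][j] += dt * (Ez[i + 1][j] - Ez[i][j]) / dx   # dBy/dt = +dEz/dx

def div_b(Bx, By, dx, dy):
    """Cell-centred discrete divergence from face-centred fields."""
    nx, ny = len(By), len(Bx[0])
    return [[(Bx[i + 1][j] - Bx[i][j]) / dx + (By[i][j + 1] - By[i][j]) / dy
             for j in range(ny)] for i in range(nx)]

rng = random.Random(0)
nx = ny = 8
dx = dy = 1.0
Bx = [[0.0] * ny for _ in range(nx + 1)]          # divergence-free start
By = [[0.0] * (ny + 1) for _ in range(nx)]
Ez = [[rng.uniform(-1, 1) for _ in range(ny + 1)] for _ in range(nx + 1)]
for _ in range(10):
    ct_step(Bx, By, Ez, dt=0.1, dx=dx, dy=dy)
assert all(abs(d) < 1e-13 for row in div_b(Bx, By, dx, dy) for d in row)
```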
COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY
Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.
2015-01-01
Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198
State-constrained booster trajectory solutions via finite elements and shooting
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Seywald, Hans
1993-01-01
This paper presents an extension of a FEM formulation based on variational principles. A general formulation for handling internal boundary conditions and discontinuities in the state equations is presented, and the general formulation is modified for optimal control problems subject to state-variable inequality constraints. Solutions which only touch the state constraint and solutions which have a boundary arc of finite length are considered. Suitable shape and test functions are chosen for a FEM discretization. All element quadrature (equivalent to one-point Gaussian quadrature over each element) may be done in closed form. The final form of the algebraic equations is then derived. A simple state-constrained problem is solved. Then, for a practical application of the use of the FEM formulation, a launch vehicle subject to a dynamic pressure constraint (a first-order state inequality constraint) is solved. The results presented for the launch-vehicle trajectory have some interesting features, including a touch-point solution.
Antunes, J; Debut, V
2017-02-01
Most musical instruments consist of dynamical subsystems connected at a number of constraining points through which energy flows. For physical sound synthesis, one important difficulty deals with enforcing these coupling constraints. While standard techniques include the use of Lagrange multipliers or penalty methods, in this paper, a different approach is explored, the Udwadia-Kalaba (U-K) formulation, which is rooted on analytical dynamics but avoids the use of Lagrange multipliers. This general and elegant formulation has been nearly exclusively used for conceptual systems of discrete masses or articulated rigid bodies, namely, in robotics. However its natural extension to deal with continuous flexible systems is surprisingly absent from the literature. Here, such a modeling strategy is developed and the potential of combining the U-K equation for constrained systems with the modal description is shown, in particular, to simulate musical instruments. Objectives are twofold: (1) Develop the U-K equation for constrained flexible systems with subsystems modelled through unconstrained modes; and (2) apply this framework to compute string/body coupled dynamics. This example complements previous work [Debut, Antunes, Marques, and Carvalho, Appl. Acoust. 108, 3-18 (2016)] on guitar modeling using penalty methods. Simulations show that the proposed technique provides similar results with a significant improvement in computational efficiency.
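For orientation, the U-K result can be stated compactly (standard form in generic notation, not necessarily the authors'): for an unconstrained system $M\ddot{q} = F$ with unconstrained acceleration $a = M^{-1}F$, subject to constraints $A(q,\dot{q},t)\,\ddot{q} = b$, the constrained acceleration is

```latex
\ddot{q} \;=\; a \;+\; M^{-1/2}\left(A\,M^{-1/2}\right)^{+}\left(b - A\,a\right),
```

where $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse; no Lagrange multipliers appear explicitly, which is the feature the paper exploits.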
A robust approach to chance constrained optimal power flow with renewable generation
Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.
2016-09-01
Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to lie within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics, including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
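The basic chance-constraint device can be sketched with the textbook Gaussian reformulation (a standard deterministic equivalent, not the paper's robust machinery, and the numbers are illustrative): a constraint required to hold with probability 1 - eps becomes a tightened deterministic limit.

```python
from statistics import NormalDist

def chance_limit(g_max, sigma, eps):
    """Deterministic equivalent of P(g + w <= g_max) >= 1 - eps for a
    Gaussian forecast error w ~ N(0, sigma^2): tighten the limit by
    the (1 - eps) quantile of the error."""
    z = NormalDist().inv_cdf(1.0 - eps)
    return g_max - z * sigma

limit = chance_limit(g_max=100.0, sigma=5.0, eps=0.05)
# z_0.95 is about 1.645, so the usable limit is about 100 - 8.22
assert 91.5 < limit < 92.0
```

The robust variant in the paper goes one step further, allowing sigma itself (and other distribution parameters) to range over an uncertainty set.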
A defect stream function, law of the wall/wake method for compressible turbulent boundary layers
NASA Technical Reports Server (NTRS)
Barnwell, Richard W.; Dejarnette, Fred R.; Wahls, Richard A.
1989-01-01
The application of the defect stream function to the solution of the two-dimensional, compressible boundary layer is examined. A law of the wall/law of the wake formulation for the inner part of the boundary layer is presented which greatly simplifies the computational task near the wall and eliminates the need for an eddy viscosity model in this region. The eddy viscosity model in the outer region is arbitrary. The modified Crocco temperature-velocity relationship is used as a simplification of the differential energy equation. Formulations for both equilibrium and nonequilibrium boundary layers are presented including a constrained zero-order form which significantly reduces the computational workload while retaining the significant physics of the flow. A formulation for primitive variables is also presented. Results are given for the constrained zero-order and second-order equilibrium formulations and are compared with experimental data. A compressible wake function valid near the wall has been developed from the present results.
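For orientation, the incompressible law-of-the-wall/law-of-the-wake profile can be sketched in Coles' form, with the standard constants kappa ~ 0.41, B ~ 5.0 and an assumed wake parameter Pi (the paper's compressible formulation modifies this picture):

```python
import math

KAPPA, B = 0.41, 5.0   # standard incompressible log-law constants

def wall_wake(y_plus, eta, Pi=0.55):
    """Coles law-of-the-wall/law-of-the-wake velocity profile:
    u+ = (1/kappa) ln y+ + B + (2 Pi / kappa) sin^2(pi eta / 2),
    where eta = y/delta is the outer coordinate."""
    log_law = math.log(y_plus) / KAPPA + B
    wake = (2.0 * Pi / KAPPA) * math.sin(math.pi * eta / 2.0) ** 2
    return log_law + wake

# At the boundary-layer edge (eta = 1) the full wake strength is added.
u_edge = wall_wake(y_plus=1000.0, eta=1.0)
u_log = math.log(1000.0) / KAPPA + B
assert abs(u_edge - (u_log + 2 * 0.55 / 0.41)) < 1e-12
```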
Cournot games with network effects for electric power markets
NASA Astrophysics Data System (ADS)
Spezia, Carl John
The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled by a Cournot game oligopoly where suppliers compete by selecting profit maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation will give system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.
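A minimal unconstrained Cournot computation (linear inverse demand and constant marginal costs, assumptions not taken from the dissertation) shows the best-response iteration that the constrained formulations build on; with network limits active, the classical iteration is exactly where multiple solutions can arise.

```python
def cournot_equilibrium(a, b, costs, iters=200):
    """Best-response (Gauss-Seidel) iteration for an unconstrained
    Cournot game with inverse demand p = a - b * Q and constant
    marginal costs: q_i maximizes (p - c_i) * q_i given the others."""
    q = [0.0] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = sum(q) - q[i]
            q[i] = max(0.0, (a - c - b * others) / (2.0 * b))
    return q

q = cournot_equilibrium(a=100.0, b=1.0, costs=[10.0, 10.0])
# Symmetric duopoly closed form: q_i = (a - c) / (3 b) = 30.
assert all(abs(qi - 30.0) < 1e-6 for qi in q)
```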
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1984-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. A target set is associated with each opponent in the event space, in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously, or in neither, a joint capture or a draw, respectively, occurs. Resolution of the encounter is formulated as a combat game: a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game; otherwise the optimal strategies are computed from a resulting nonzero-sum game. Since optimal combat strategies may frequently not exist, approximate (delta) combat games are also formulated, leading to approximate (delta-optimal) strategies. The turret game is used to illustrate combat games. This game is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
A simple strategy for jumping straight up.
Hemami, Hooshang; Wyman, Bostwick F
2012-05-01
Jumping from a stationary standing position into the air is a transition from a constrained motion in contact with the ground to an unconstrained system not in contact with the ground. A simple case of the jump, as it applies to humans, robots and humanoids, is studied in this paper. The dynamics of the constrained rigid body are expanded to define a larger system that accommodates the jump. The formulation is applied to a four-link, three-dimensional system in order to articulate the ballistic motion involved. The activity of the muscular system and the role of the major sagittal muscle groups are demonstrated. The control strategy, involving state feedback and central feed forward signals, is formulated and computer simulations are presented to assess the feasibility of the formulations, the strategy and the jump. Copyright © 2012 Elsevier Inc. All rights reserved.
Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin
2008-09-01
We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and usually are not enough to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and the background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.
NASA Technical Reports Server (NTRS)
Tseng, K.; Morino, L.
1975-01-01
A general formulation is presented for the analysis of steady and unsteady, subsonic and supersonic aerodynamics for complex aircraft configurations. The theoretical formulation, the numerical procedure, the description of the program SOUSSA (steady, oscillatory and unsteady, subsonic and supersonic aerodynamics) and numerical results are included. In particular, generalized forces for fully unsteady (complex frequency) aerodynamics for a wing-body configuration, AGARD wing-tail interference in both subsonic and supersonic flows as well as flutter analysis results are included. The theoretical formulation is based upon an integral equation, which includes completely arbitrary motion. Steady and oscillatory aerodynamic flows are considered. Here small-amplitude, fully transient response in the time domain is considered. This yields the aerodynamic transfer function (Laplace transform of the fully unsteady operator) for frequency domain analysis. This is particularly convenient for the linear systems analysis of the whole aircraft.
Analytical Dynamics and Nonrigid Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Likins, P. W.
1974-01-01
Applications to the simulation of idealized spacecraft are considered, both for multiple-rigid-body models and for models consisting of combinations of rigid bodies and elastic bodies, with the elastic bodies defined either as continua, as finite-element systems, or as a collection of given modal data. Several specific examples are developed in detail by alternative methods of analytical mechanics, and the results are compared to a Newton-Euler formulation. The following methods are developed from d'Alembert's principle in vector form: (1) Lagrange's form of d'Alembert's principle for independent generalized coordinates; (2) Lagrange's form of d'Alembert's principle for simply constrained systems; (3) Kane's quasi-coordinate formulation of d'Alembert's principle; (4) Lagrange's equations for independent generalized coordinates; (5) Lagrange's equations for simply constrained systems; (6) Lagrangian quasi-coordinate equations (the Boltzmann-Hamel equations); (7) Hamilton's equations for simply constrained systems; and (8) Hamilton's equations for independent generalized coordinates.
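For reference, the workhorse forms in this list can be written compactly (generic notation, not the report's): Lagrange's equations with generalized forces $Q_k$, and their simply constrained counterpart with multipliers $\lambda_j$ on constraint coefficients $a_{jk}$,

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_k} - \frac{\partial L}{\partial q_k} = Q_k,
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_k} - \frac{\partial L}{\partial q_k} = Q_k + \sum_j \lambda_j\, a_{jk},
```

where the constrained form assumes constraints of the form $\sum_k a_{jk}\,\dot{q}_k + a_{j0} = 0$.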
Wavefield reconstruction inversion with a multiplicative cost function
NASA Astrophysics Data System (ADS)
da Silva, Nuno V.; Yao, Gang
2018-01-01
We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that balances the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large, the wave-equation term dominates, effectively imposing a hard constraint in the inversion. If it is too small, the solution is poorly constrained, as the inversion essentially penalises the data misfit without taking into account the physics that explains the data. This paper introduces a new approach to WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function when, in the latter, the trade-off parameter is appropriately scaled, when it is adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
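Schematically (our notation, not necessarily the paper's), with wavefield $u$, model $m$, data $d$, sampling operator $P$, wave-equation operator $A(m)$ and source $q$, the usual additive WRI objective and the multiplicative recasting described in the abstract are

```latex
J_{\mathrm{add}}(m,u) = \tfrac{1}{2}\,\|Pu - d\|^2 + \tfrac{\lambda^2}{2}\,\|A(m)\,u - q\|^2,
\qquad
J_{\mathrm{mult}}(m,u) = \|Pu - d\|^2 \cdot \|A(m)\,u - q\|^2,
```

so that in the multiplicative form no explicit trade-off parameter $\lambda$ has to be chosen by hand.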
Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan
2013-08-22
Within the framework of constrained density functional theory (CDFT), the diabatic or charge-localized states of electron transfer (ET) have been constructed. Based on the diabatic states, the inner reorganization energy λin has been directly calculated. For the solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression for λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm⁻¹ for the TCNE/TCNE⁻ reaction and 5939 cm⁻¹ for the TTF/TTF⁺ reaction, agreeing well with the available experimental results of 7250 cm⁻¹ and 5810 cm⁻¹, respectively.
NASA Astrophysics Data System (ADS)
Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.
2014-06-01
The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a load reduction not-to-exceed the next dominating loads, a not-to-be-exceeded maximum rotor speed, and a maximum achievable blade pitch rate. Cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated by using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the novel proposed formulation, by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads, in a fully automated way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
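The linear programming formulation mentioned here is, schematically, an optimization over discounted occupation measures μ on state-action pairs (one common normalized form; the paper's hypotheses concern the weight function under which these integrals are well defined):

```latex
\min_{\mu}\ \int c\, d\mu
\quad \text{s.t.} \quad
\int d_i\, d\mu \le k_i \ (i = 1,\dots,q),
\qquad
\mu(B \times A) = (1-\alpha)\,\nu(B) + \alpha \int P(B \mid x, a)\, \mu(d(x,a)),
```

with discount factor $\alpha$, initial distribution $\nu$, and transition kernel $P$; an optimal stationary policy is then recovered by disintegrating the optimal $\mu$ into a marginal on states and a kernel on actions.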
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation-optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.; Barnes, D. C.
2011-08-01
This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental in preventing particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme.
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
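To illustrate the Jacobian-free Newton-Krylov ingredient in isolation: a tiny nonlinear system is driven to a tight residual tolerance without ever forming a Jacobian, using SciPy's `newton_krylov`. The residual below is a made-up stand-in, not the coupled particle-field residual of the paper.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Jacobian-free Newton-Krylov on a toy system F(x) = 0: Jacobian-vector
# products are approximated internally by finite differences, so only the
# residual function is needed. The actual scheme collects all field (and,
# after enslavement, no particle) unknowns into one such residual.
def residual(x):
    # hypothetical stand-in for a nonlinearly coupled residual
    return np.array([
        x[0] + 0.5 * np.sin(x[1]) - 1.0,
        x[1] + 0.5 * np.cos(x[0]) - 1.0,
    ])

sol = newton_krylov(residual, np.zeros(2), f_tol=1e-10)
print(np.max(np.abs(residual(sol))))  # tight nonlinear convergence
```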
Exact charge and energy conservation in implicit PIC with mapped computational meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Barnes, D. C.
Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long
2015-05-01
This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R2) of 0.63, significant at the 0.01 level, against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that fully constrained linear SMA yields better crown closure estimates, closer to the actual conditions in moso bamboo forest.
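Fully constrained linear SMA enforces nonnegative abundances that sum to one. A common implementation trick, sketched below with made-up two-endmember spectra, appends a heavily weighted sum-to-one row to the endmember matrix and solves with nonnegative least squares.

```python
import numpy as np
from scipy.optimize import nnls

# Fully constrained linear spectral unmixing (nonnegativity + sum-to-one)
# via weighted-row augmentation and NNLS. Endmember spectra and the pixel
# below are illustrative numbers, not data from the study.
E = np.array([[0.10, 0.60],    # band 1 reflectance (canopy, background)
              [0.40, 0.20],    # band 2
              [0.50, 0.30]])   # band 3
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # synthetic mixed pixel

delta = 1e3  # large weight enforcing the sum-to-one constraint
E_aug = np.vstack([E, delta * np.ones(2)])
y_aug = np.append(pixel, delta)

abund, _ = nnls(E_aug, y_aug)
print(abund)  # close to [0.7, 0.3]
```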
1982-10-01
"Initial Boundary Value of Gun Dynamics Solved by Finite Element Unconstrained Variational Formulations," Innovative Numerical Analysis for the Applied Engineering Science, R. P. Shaw, et al., Editors, University Press of Virginia, Charlottesville, pp. 733-741, 1980. 2. J. J. Wu, "Solutions to Initial
A chance constraint estimation approach to optimizing resource management under uncertainty
Michael Bevers
2007-01-01
Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...
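When the random resource limit is (or is approximated as) normally distributed, a single linear chance constraint has a well-known deterministic equivalent obtained by tightening the limit with a normal quantile. The sketch below, with illustrative numbers, is that textbook reduction, not the paper's heuristic estimation method.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

# Deterministic equivalent of the chance constraint P(x1 + x2 <= B) >= 0.95
# when the resource limit B ~ Normal(mu, sigma): tighten the right-hand side
# to mu - z_{0.95} * sigma. All numbers are illustrative.
mu, sigma, alpha = 100.0, 10.0, 0.05
z = norm.ppf(1 - alpha)        # ~1.645
b_eff = mu - z * sigma         # tightened deterministic limit

# maximize 3*x1 + 2*x2  subject to  x1 + x2 <= b_eff,  x >= 0
res = linprog(c=[-3.0, -2.0], A_ub=[[1.0, 1.0]], b_ub=[b_eff],
              bounds=[(0.0, None)] * 2)
print(res.x, -res.fun)
```

At the optimum the full tightened budget goes to the higher-value activity, so the 95% reliability level is met by construction.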
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1985-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. With each opponent is associated a target set in the event space in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously or in neither, a joint capture or a draw, respectively, is said to occur. Resolution of the encounter is formulated as a combat game; namely, as a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game; otherwise the optimal strategies are computed from a resulting non-zero-sum game. Since optimal combat strategies frequently may not exist, approximate or delta-combat games are also formulated, leading to approximate or delta-optimal strategies. To illustrate combat games, an example, called the turret game, is considered. This game may be thought of as a highly simplified model of air combat, yet it is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, the primary and secondary tradeoffs between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
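For a handful of units, LOLP can be computed exactly by enumerating unit availability states and summing the probability of states whose available capacity falls below load. The unit sizes, forced-outage rates, and load below are illustrative, not the IEEE-RTS data.

```python
import itertools

# Brute-force LOLP for a tiny system: enumerate all on/off states of the
# committed units; LOLP is the total probability of states whose available
# capacity is below the load. Numbers are made up for illustration.
units = [(100, 0.02), (100, 0.02), (50, 0.05)]   # (capacity MW, forced outage rate)
load = 180.0

lolp = 0.0
for state in itertools.product([0, 1], repeat=len(units)):  # 1 = available
    p, cap = 1.0, 0.0
    for avail, (c, q) in zip(state, units):
        p *= (1 - q) if avail else q
        cap += c if avail else 0.0
    if cap < load:
        lolp += p
print(lolp)  # 0.0396
```

The exponential growth of this enumeration with the number of units is precisely the intractability that motivates the simplified LOLP formulations discussed in the paper.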
NASA Astrophysics Data System (ADS)
Di Pietro, Daniele A.; Marche, Fabien
2018-02-01
In this paper, we further investigate the use of a fully discontinuous finite element discrete formulation for the study of shallow water free surface flows in the fully nonlinear and weakly dispersive flow regime. We consider a decoupling strategy in which we approximate the solutions of the classical shallow water equations supplemented with a source term globally accounting for the non-hydrostatic effects. This source term can be computed through the resolution of elliptic second-order linear sub-problems, which only involve second-order partial derivatives in space. We then introduce an associated Symmetric Weighted Internal Penalty discrete bilinear form, allowing us to deal with the discontinuous nature of the elliptic problem's coefficients in a stable and consistent way. Similar discrete formulations are also introduced for several recent optimized fully nonlinear and weakly dispersive models. These formulations are validated against several benchmarks involving h-convergence, p-convergence and comparisons with experimental data, showing optimal convergence properties.
NASA Astrophysics Data System (ADS)
Shao, H.; Huang, Y.; Kolditz, O.
2015-12-01
Multiphase flow problems are numerically difficult to solve, as they often involve nonlinear phase-transition phenomena. A conventional technique is to introduce complementarity constraints, whereby fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints based on the persistent primary variables formulation [4] are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. coupling the constraints with the local constitutive equations. The second approach [2,3], namely the "global complementarity constraints", applies the constraints globally with the mass conservation equation. We discuss how these two approaches are applied to solve non-isothermal compositional multiphase flow problems with phase-change phenomena. Several benchmarks are presented to investigate the overall numerical performance of the different approaches, and their advantages and disadvantages are summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model. Computational Geosciences, 17(2), 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media, 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in porous media: application to gas migration in a nuclear waste repository. Comp. Geosciences, 13(1), (2009), 29-42.
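A complementarity condition a >= 0, b >= 0, a*b = 0 (e.g. a saturation versus a phase-presence indicator) can be recast as a single equation via the Fischer-Burmeister function and handed to a standard nonlinear solver. The toy system below is an assumption-laden stand-in, not the multiphase model itself.

```python
import numpy as np
from scipy.optimize import fsolve

# Fischer-Burmeister reformulation: phi(a, b) = a + b - sqrt(a^2 + b^2)
# vanishes exactly when a >= 0, b >= 0 and a*b = 0, so complementarity
# constraints become ordinary residual equations.
def fb(a, b):
    return a + b - np.sqrt(a * a + b * b)

# toy system: a conservation-like equation x + y = 1 coupled with
# complementarity between x and (y - 0.2) (both names are hypothetical)
def residual(v):
    x, y = v
    return [x + y - 1.0, fb(x, y - 0.2)]

x, y = fsolve(residual, [0.5, 0.5])
print(x, y)
```

Either complementarity branch (x = 0 or y = 0.2) is an admissible root; the solver selects one without any explicit variable switching, which is the practical appeal of this reformulation.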
Hamiltonian analysis for linearly acceleration-dependent Lagrangians
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cruz, Miguel, E-mail: miguelcruz02@uv.mx; Gómez-Cortés, Rosario; Rojas, Efraín, E-mail: efrojas@uv.mx
2016-06-15
We study the constrained Ostrogradski-Hamilton framework for the equations of motion provided by mechanical systems described by second-order derivative actions with a linear dependence on the accelerations. We stress the peculiar features of the surface terms arising for this type of theory and discuss some important properties of this kind of action in order to pave the way for the construction of a well-defined quantum counterpart by means of canonical methods. In particular, we analyse in detail the constraint structure for these theories and its relation to the inherent conserved quantities, where the associated energies together with a Noether charge may be identified. The constraint structure is fully analyzed without the introduction of auxiliary variables, as proposed in recent works involving higher-order Lagrangians. Finally, we also provide some examples where our approach is explicitly applied, and emphasize the way in which our original arrangement proves propitious for the Hamiltonian formulation of covariant field theories.
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian, the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.
Distance Metric Learning via Iterated Support Vector Machines.
Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei
2017-07-11
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large-scale problems. In this paper, we formulate metric learning as a kernel classification problem with a positive semidefinite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient to train with off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification tasks to evaluate our methods. Compared with state-of-the-art approaches, our methods achieve comparable classification accuracy and are efficient in training.
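A standard building block in positive-semidefinite-constrained metric learning is projecting a symmetric matrix onto the PSD cone by clipping negative eigenvalues. The sketch below shows only that projection step, not the paper's full PCML/NCML algorithms.

```python
import numpy as np

# Projection onto the positive semidefinite cone: symmetrize, then clip
# negative eigenvalues at zero and reassemble. This is the step that keeps
# an iteratively updated metric matrix a valid (pseudo-)metric.
def project_psd(M):
    M = (M + M.T) / 2.0                    # symmetrize
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T  # rebuild with clipped spectrum

M = np.array([[2.0, 0.0], [0.0, -1.0]])
P = project_psd(M)
print(P)  # [[2, 0], [0, 0]]
```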
Wagner-Hattler, Leonie; Schoelkopf, Joachim; Huwyler, Jörg; Puchkov, Maxim
2017-10-01
The performance of a new mineral-polymer composite (FCC-PCL) was assessed for producing complex geometries to aid the development of controlled-release tablet formulations. The mechanical characteristics of the developed material, such as compactibility, compressibility and elastoplastic deformation, were measured. The results and a comparative analysis against other common excipients suggest efficient formation of complex, stable and impermeable geometries for constrained drug-release modification under compression. The performance of the proposed composite material was tested by compacting it into a geometrically altered tablet (Tablet-In-Cup, TIC), and the drug release was compared to a commercially available product. The TIC device exhibited a uniform surface, high physical stability, and absence of friability. The FCC-PCL composite had good binding properties and good compactibility. An enhanced plasticity, not present in the individual components, was revealed for the new material. The presented FCC-PCL composite mixture has the potential to become a successful tool for formulating controlled-release solid dosage forms.
NASA Technical Reports Server (NTRS)
Tseng, K.; Morino, L.
1975-01-01
A general formulation for the analysis of steady and unsteady, subsonic and supersonic potential aerodynamics for arbitrary complex geometries is presented. The theoretical formulation, the numerical procedure, and numerical results are included. In particular, generalized forces for fully unsteady (complex frequency) aerodynamics for an AGARD coplanar wing-tail interfering configuration in both subsonic and supersonic flows are considered.
Hamiltonian formulation of the KdV equation
NASA Astrophysics Data System (ADS)
Nutku, Y.
1984-06-01
We consider the canonical formulation of Whitham's variational principle for the KdV equation. This Lagrangian is degenerate, and we have found it necessary to use Dirac's theory of constrained systems in constructing the Hamiltonian. Earlier discussions of the Hamiltonian structure of the KdV equation were based on various decompositions of the field, which are avoided by this new approach.
Geometric constrained variational calculus. II: The second variation (Part I)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization, constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but need not be satisfied exactly. Currently, voxel dose constraints are viewed as soft constraints, included as part of the objective function, and the problem is approximated as unconstrained. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators applicable to quadratic IMRT constrained optimization were first constructed, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for projecting a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including the TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by L-BFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi) and Mosek solvers. For unconstrained optimization, L-BFGS performed best and was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization.
It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
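A minimal ADMM sketch for a box-constrained quadratic, the structural core of the constrained problems above: splitting x = z makes the x-update an unconstrained linear solve and the z-update a projection (clip). The data are random and the formulation is generic, not the paper's graph-form IMRT model.

```python
import numpy as np

# ADMM for: minimize ||A x - b||^2  subject to  0 <= x <= 1,
# via the splitting x = z. Each iteration alternates a quadratic solve,
# a projection onto the box, and a (scaled) dual update.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
rho = 1.0                                    # penalty parameter (assumed)

AtA, Atb = A.T @ A, A.T @ b
K = AtA + rho * np.eye(5)
x = z = u = np.zeros(5)
for _ in range(200):
    x = np.linalg.solve(K, Atb + rho * (z - u))   # quadratic step
    z = np.clip(x + u, 0.0, 1.0)                  # projection step
    u = u + x - z                                 # dual update

print(np.max(np.abs(x - z)))  # primal residual, small at convergence
```

The projection step is one of the proximal operators the paper refers to; hard voxel dose constraints would replace the simple box with their own projection.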
Pseudo-updated constrained solution algorithm for nonlinear heat conduction
NASA Technical Reports Server (NTRS)
Tovichakchaikul, S.; Padovan, J.
1983-01-01
This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.
Uniform magnetic fields in density-functional theory
NASA Astrophysics Data System (ADS)
Tellgren, Erik I.; Laestadius, Andre; Helgaker, Trygve; Kvaal, Simen; Teale, Andrew M.
2018-01-01
We construct a density-functional formalism adapted to uniform external magnetic fields that is intermediate between conventional density functional theory and Current-Density Functional Theory (CDFT). In the intermediate theory, which we term linear vector potential-DFT (LDFT), the basic variables are the density, the canonical momentum, and the paramagnetic contribution to the magnetic moment. Both a constrained-search formulation and a convex formulation in terms of Legendre-Fenchel transformations are constructed. Many theoretical issues in CDFT find simplified analogs in LDFT. We prove results concerning N-representability, Hohenberg-Kohn-like mappings, existence of minimizers in the constrained-search expression, and a restricted analog to gauge invariance. The issue of additivity of the energy over non-interacting subsystems, which is qualitatively different in LDFT and CDFT, is also discussed.
Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia
2014-08-01
We first briefly present the last version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.
SMA Hybrid Composites for Dynamic Response Abatement Applications
NASA Technical Reports Server (NTRS)
Turner, Travis L.
2000-01-01
A recently developed constitutive model and a finite element formulation for predicting the thermomechanical response of Shape Memory Alloy (SMA) hybrid composite (SMAHC) structures is briefly described. Attention is focused on constrained recovery behavior in this study, but the constitutive formulation is also capable of modeling restrained or free recovery. Numerical results are shown for glass/epoxy panel specimens with embedded Nitinol actuators subjected to thermal and acoustic loads. Control of thermal buckling, random response, sonic fatigue, and transmission loss are demonstrated and compared to conventional approaches including addition of conventional composite layers and a constrained layer damping treatment. Embedded SMA actuators are shown to be significantly more effective in dynamic response abatement applications than the conventional approaches and are attractive for combination with other passive and/or active approaches.
Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight
Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures the terminal constraints on position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to have prospects for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
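Bézier approximation pins the curve's endpoints (and, through the adjacent control points, the end slopes), which is why it suits terminally constrained trajectories. A minimal de Casteljau evaluation, with made-up scalar control points, illustrates the interpolation of the end conditions.

```python
import numpy as np

# De Casteljau evaluation of a Bezier curve: repeatedly blend adjacent
# control points. The curve always interpolates the first and last control
# points, so terminal values can be enforced exactly by construction.
def de_casteljau(ctrl, t):
    """Evaluate the Bezier curve with control points `ctrl` at t in [0, 1]."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = [0.0, 0.4, 0.9, 1.0]      # hypothetical profile; endpoints are the constraints
print(de_casteljau(ctrl, 0.0))   # 0.0: curve hits the first control point
print(de_casteljau(ctrl, 1.0))   # 1.0: and the last
```

In the paper's scheme the interior control points are the degrees of freedom adjusted (via the predicted terminal velocity) while the constrained endpoints stay fixed.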
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Miller, C. T.; Dye, A. L.; Gray, W. G.; McClure, J. E.; Rybak, I.
2015-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we solve the model using a locally conservative numerical scheme and compare the TCAT model to the traditional model.
USDA-ARS?s Scientific Manuscript database
Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...
An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.
NASA Astrophysics Data System (ADS)
Tate, Ranjeet Shekhar
1992-01-01
General relativity has two features in particular which make it difficult to apply existing schemes for the quantization of constrained systems. First, there is no background structure in the theory which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states: essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite-dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features; when this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization.
In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.
Learning optimal embedded cascades.
Saberian, Mohammad Javad; Vasconcelos, Nuno
2012-10-01
The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
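The detection-rate constraint central to the abstract above can be illustrated, in a much simplified form, by the common cascade heuristic of thresholding each stage at a quantile of the positive-class scores (RCBoost instead enforces the rate inside the boosting optimization via a barrier penalty; the function and data below are illustrative, not from the paper):

```python
import numpy as np

def stage_threshold(pos_scores, target_detection=0.99):
    """Pick a stage threshold so that at least `target_detection` of the
    positive examples pass. This is a common cascade heuristic; RCBoost
    instead meets the rate at every boosting iteration via a barrier."""
    s = np.sort(np.asarray(pos_scores, dtype=float))
    # discard at most (1 - target_detection) of the positives from below
    idx = int(np.floor((1.0 - target_detection) * len(s)))
    return s[idx]

scores = np.linspace(0.0, 1.0, 100)   # toy positive-class scores
thr = stage_threshold(scores, target_detection=0.95)
passed = np.mean(scores >= thr)       # fraction of positives retained
```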
Stage-discharge relationship in tidal channels
NASA Astrophysics Data System (ADS)
Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.
2016-12-01
Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models which relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, nonlinearity and nonstationarity of the relationship complicate the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month-long time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models which account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.
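The simplified piecewise-linear formulation mentioned above can be sketched as an ordinary least-squares fit over a two-segment basis (the fixed breakpoint and the synthetic data are assumptions for illustration, not the authors' model):

```python
import numpy as np

def fit_piecewise_linear(stage, discharge, breakpoint):
    """Least-squares fit of a two-segment piecewise-linear
    stage-discharge relation with a fixed breakpoint.
    Basis functions: [1, h, max(h - breakpoint, 0)]."""
    h = np.asarray(stage, dtype=float)
    X = np.column_stack([np.ones_like(h), h, np.maximum(h - breakpoint, 0.0)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(discharge, dtype=float), rcond=None)
    return coef

def predict(coef, stage, breakpoint):
    h = np.asarray(stage, dtype=float)
    return coef[0] + coef[1] * h + coef[2] * np.maximum(h - breakpoint, 0.0)

# Synthetic rating data: discharge responds more steeply above h = 1.0
h = np.linspace(0.0, 2.0, 50)
q = 0.5 + 1.0 * h + 2.0 * np.maximum(h - 1.0, 0.0)
coef = fit_piecewise_linear(h, q, breakpoint=1.0)
```

A time-varying version would refit (or recursively update) the coefficients over a moving window of the stage record.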
NASA Astrophysics Data System (ADS)
Anoukou, K.; Pastor, F.; Dufrenoy, P.; Kondo, D.
2016-06-01
The present two-part study aims at investigating the specific effects of a Mohr-Coulomb matrix on the strength of ductile porous materials by using a kinematic limit analysis approach. While in Part II, static and kinematic bounds are numerically derived and used for validation purposes, the present Part I focuses on the theoretical formulation of a macroscopic strength criterion for porous Mohr-Coulomb materials. To this end, we consider a hollow sphere model with a rigid perfectly plastic Mohr-Coulomb matrix, subjected to axisymmetric uniform strain rate boundary conditions. Taking advantage of an appropriate family of three-parameter trial velocity fields accounting for the specific plastic deformation mechanisms of the Mohr-Coulomb matrix, we then provide a solution of the constrained minimization problem required for the determination of the macroscopic dissipation function. The macroscopic strength criterion is then obtained by means of the Lagrangian method combined with Karush-Kuhn-Tucker conditions. After a careful analysis and discussion of the plastic admissibility condition associated with the Mohr-Coulomb criterion, the above procedure leads to a parametric closed-form expression of the macroscopic strength criterion. The latter explicitly shows a dependence on the three stress invariants. In the special case of a friction angle equal to zero, the established criterion reduces to recently available results for porous Tresca materials. Finally, the effects of both matrix friction angle and porosity are briefly illustrated and, for completeness, the macroscopic plastic flow rule and the void evolution law are fully furnished.
Thangarajah, Tanujan; Higgs, Deborah; Bayley, J I L; Lambert, Simon M
2016-01-01
AIM: To report the results of fixed-fulcrum fully constrained reverse shoulder arthroplasty for the treatment of recurrent shoulder instability in patients with epilepsy. METHODS: A retrospective review was conducted at a single facility. Cases were identified using a computerized database and all clinic notes and operative reports were reviewed. All patients with epilepsy and recurrent shoulder instability were included for study. Between July 2003 and August 2011 five shoulders in five consecutive patients with epilepsy underwent fixed-fulcrum fully constrained reverse shoulder arthroplasty for recurrent anterior shoulder instability. The mean duration of epilepsy in the cohort was 21 years (range, 5-51) and all patients suffered from grand mal seizures. RESULTS: Mean age at the time of surgery was 47 years (range, 32-64). The cohort consisted of four males and one female. Mean follow-up was 4.7 years (range, 4.3-5 years). There were no further episodes of instability, and no further stabilisation or revision procedures were performed. The mean Oxford shoulder instability score improved from 8 preoperatively (range, 5-15) to 30 postoperatively (range, 16-37) (P = 0.015) and the mean subjective shoulder value improved from 20 (range, 0-50) preoperatively to 60 (range, 50-70) postoperatively (P = 0.016). Mean active forward elevation improved from 71° preoperatively (range, 45°-130°) to 100° postoperatively (range, 80°-90°) and mean active external rotation improved from 15° preoperatively (range, 0°-30°) to 40° (20°-70°) postoperatively. No cases of scapular notching or loosening were noted. CONCLUSION: Fixed-fulcrum fully constrained reverse shoulder arthroplasty should be considered for the treatment of recurrent shoulder instability in patients with epilepsy. PMID:27458554
Kim, Hyunsoo; Park, Haesun
2007-06-15
Many practical pattern recognition problems require non-negativity constraints. For example, pixels in digital images and chemical concentrations in bioinformatics are non-negative. Sparse non-negative matrix factorizations (NMFs) are useful when the degree of sparseness in the non-negative basis matrix or the non-negative coefficient matrix in an NMF needs to be controlled in approximating high-dimensional data in a lower dimensional space. In this article, we introduce a novel formulation of sparse NMF and show how the new formulation leads to a convergent sparse NMF algorithm via alternating non-negativity-constrained least squares. We apply our sparse NMF algorithm to cancer-class discovery and gene expression data analysis and offer biological analysis of the results obtained. Our experimental results illustrate that the proposed sparse NMF algorithm often achieves better clustering performance with shorter computing time compared to other existing NMF algorithms. The software is available as supplementary material.
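A minimal sketch of the alternating non-negativity-constrained least squares (ANLS) idea, using SciPy's `nnls`. The sparsity penalty of the paper's formulation is omitted here, so this is plain NMF via ANLS, not the authors' sparse variant:

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, k, iters=30, seed=0):
    """Plain NMF by alternating nonnegativity-constrained least squares.
    (Kim & Park's sparse variant augments each least-squares subproblem
    with regularisation rows; that augmentation is omitted here.)"""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = np.zeros((k, n))
    for _ in range(iters):
        # min_H ||A - W H||_F subject to H >= 0, column by column
        for j in range(n):
            H[:, j], _ = nnls(W, A[:, j])
        # min_W ||A^T - H^T W^T||_F subject to W >= 0, row by row
        for i in range(m):
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H

A = np.abs(np.random.default_rng(1).random((8, 6)))
W, H = nmf_anls(A, k=3)
err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```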
Wels, Michael; Carneiro, Gustavo; Aplas, Alexander; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin
2008-01-01
In this paper we present a fully automated approach to the segmentation of pediatric brain tumors in multi-spectral 3-D magnetic resonance images. It is a top-down segmentation approach based on a Markov random field (MRF) model that combines probabilistic boosting trees (PBT) and lower-level segmentation via graph cuts. The PBT algorithm provides a strong discriminative observation model that classifies tumor appearance while a spatial prior takes into account the pair-wise homogeneity in terms of classification labels and multi-spectral voxel intensities. The discriminative model relies not only on observed local intensities but also on surrounding context for detecting candidate regions for pathology. A mathematically sound formulation for integrating the two approaches into a unified statistical framework is given. The proposed method is applied to the challenging task of detection and delineation of pediatric brain tumors. This segmentation task is characterized by a high non-uniformity of both the pathology and the surrounding non-pathologic brain tissue. A quantitative evaluation illustrates the robustness of the proposed method. Despite dealing with more complicated cases of pediatric brain tumors the results obtained are mostly better than those reported for current state-of-the-art approaches to 3-D MR brain tumor segmentation in adult patients. The entire processing of one multi-spectral data set does not require any user interaction, and takes less time than previously proposed methods.
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
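For a normally distributed travel time (an assumption of this sketch, not necessarily the paper's distributional choice), an on-time-delivery chance constraint reduces to a deterministic inequality via the normal quantile:

```python
from statistics import NormalDist

def chance_constraint_ok(mu, sigma, deadline, alpha=0.95):
    """Deterministic equivalent of P(T <= deadline) >= alpha for a
    normally distributed travel time T ~ N(mu, sigma^2):
        mu + z_alpha * sigma <= deadline,
    where z_alpha is the alpha-quantile of the standard normal."""
    z = NormalDist().inv_cdf(alpha)
    return mu + z * sigma <= deadline

# A route with mean travel time 40 h and std 5 h against a 50 h deadline:
ok = chance_constraint_ok(40.0, 5.0, 50.0, alpha=0.95)
```

In the binary integer model, such inequalities appear once per candidate route, and a route variable may be set to 1 only if its inequality holds.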
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of the route, and the unobserved variance, respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten as a hierarchical Bayes formula, and a Metropolis-Hastings (M-H) sampling-based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for the calculation of route choice probabilities and a good application performance for transfer flow volume prediction. PMID:28591188
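A generic random-walk Metropolis-Hastings sampler, of the kind embedded in the paper's hierarchical-Bayes estimation, can be sketched as follows (the target density and tuning values are illustrative, not the CMNP posterior):

```python
import math
import random

def metropolis_hastings(logpost, x0=0.0, steps=20000, scale=1.0, seed=42):
    """Random-walk M-H: propose y ~ N(x, scale^2), accept with
    probability min(1, post(y)/post(x))."""
    random.seed(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(steps):
        y = x + random.gauss(0.0, scale)
        lq = logpost(y)
        if math.log(random.random()) < lq - lp:
            x, lp = y, lq
        samples.append(x)
    return samples

# Target: standard normal log-density (up to an additive constant)
draws = metropolis_hastings(lambda z: -0.5 * z * z)
mean = sum(draws) / len(draws)
```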
Simulation of Two-Phase Flow Based on a Thermodynamically Constrained Averaging Theory Flow Model
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Dye, A. L.; McClure, J. E.; Farthing, M. W.; Gray, W. G.; Miller, C. T.
2014-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we use locally conservative spatial discretization and higher order temporal discretization methods to approximate the solution to this new model and compare the solution to the traditional model.
Eulerian Formulation of Spatially Constrained Elastic Rods
NASA Astrophysics Data System (ADS)
Huynen, Alexandre
Slender elastic rods are ubiquitous in nature and technology. For a vast majority of applications, the rod deflection is restricted by an external constraint and a significant part of the elastic body is in contact with a stiff constraining surface. The research work presented in this doctoral dissertation formulates a computational model for the solution of elastic rods constrained inside or around frictionless tube-like surfaces. The segmentation strategy adopted to cope with this complex class of problems consists in sequencing the global problem into, comparatively simpler, elementary problems either in continuous contact with the constraint or contact-free between their extremities. Within the conventional Lagrangian formulation of elastic rods, this approach is however associated with two major drawbacks. First, the boundary conditions specifying the locations of the rod centerline at both extremities of each elementary problem lead to the establishment of isoperimetric constraints, i.e., integral constraints on the unknown length of the rod. Second, the assessment of the unilateral contact condition requires, in principle, the comparison of two curves parametrized by distinct curvilinear coordinates, viz. the rod centerline and the constraint axis. Both conspire to burden the computations associated with the method. To streamline the solution along the elementary problems and rationalize the assessment of the unilateral contact condition, the rod governing equations are reformulated within the Eulerian framework of the constraint. The methodical exploration of both types of elementary problems leads to specific formulations of the rod governing equations that stress the profound connection between the mechanics of the rod and the geometry of the constraint surface. 
The proposed Eulerian reformulation, which restates the rod local equilibrium in terms of the curvilinear coordinate associated with the constraint axis, describes the rod deformed configuration by means of either its relative position with respect to the constraint axis (contact-free segments) or its angular position on the constraint surface (continuous contacts). This formulation circumvents both drawbacks that afflict the conventional Lagrangian approach associated with the segmentation strategy. As the a priori unknown domain, viz. the rod length, is substituted for the known constraint axis, the free boundary problem and the associated isoperimetric constraints are converted into a classical two-point boundary value problem. Additionally, the description of the rod deflection by means of its eccentricity with respect to the constraint axis trivializes the assessment of the unilateral contact condition. Along continuous contacts, this formulation expresses the strain variables, measuring the rod change of shape, in terms of the geometric invariants of the constraint surface, and emphasizes the influence of the constraint local geometry on the reaction pressure. Formalizing the segmentation strategy, a computational model that exploits the Eulerian formulation of the rod governing equations is devised. To solve the quasi-static deflection of elastic rods constrained inside or around a tube-like surface, this computational model identifies the number of contacts, their nature (either discrete or continuous), and the rod configuration at the connections that satisfies the unilateral contact condition and preserves the rod integrity along the sequence of elementary problems.
Integrated Formulation of Beacon-Based Exception Analysis for Multimissions
NASA Technical Reports Server (NTRS)
Mackey, Ryan; James, Mark; Park, Han; Zak, Mickail
2003-01-01
Further work on beacon-based exception analysis for multimissions (BEAM), a method of real-time, automated diagnosis of complex electromechanical systems, has greatly expanded its capability and suitability for application. This expanded formulation, which fully integrates physical models and symbolic analysis, is described. The new formulation of BEAM expands upon previous advanced techniques for the analysis of signal data, utilizing mathematical modeling of the system physics and expert-system reasoning.
NASA Technical Reports Server (NTRS)
Fijany, A.; Featherstone, R.
1999-01-01
This paper presents a new formulation of the Constraint Force Algorithm that corrects a major limitation in the original, and sheds new light on the relationship between it and other dynamics algorithms.
On optimal strategies in event-constrained differential games
NASA Technical Reports Server (NTRS)
Heymann, M.; Rajan, N.; Ardema, M.
1985-01-01
Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
Dirac structures in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Yoshimura, Hiroaki
2018-01-01
Dirac structures are geometric objects that generalize both Poisson structures and presymplectic structures on manifolds. They naturally appear in the formulation of constrained mechanical systems. In this paper, we show that the evolution equations for nonequilibrium thermodynamics admit an intrinsic formulation in terms of Dirac structures, in both the Lagrangian and the Hamiltonian settings. In the absence of irreversible processes, these Dirac structures reduce to canonical Dirac structures associated with canonical symplectic forms on phase spaces. Our geometric formulation of nonequilibrium thermodynamics thus consistently extends the geometric formulation of mechanics, to which it reduces in the absence of irreversible processes. The Dirac structures are associated with the variational formulation of nonequilibrium thermodynamics developed in the work of Gay-Balmaz and Yoshimura, J. Geom. Phys. 111, 169-193 (2017a), and are induced from a nonlinear nonholonomic constraint given by the expression of the entropy production of the system.
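For reference, the standard definition these constructions build on (textbook material, not this paper's specific induced structure) can be stated as:

```latex
% A Dirac structure on a manifold M is a subbundle
%   D \subset TM \oplus T^*M
% that is maximally isotropic, D = D^{\perp}, with respect to
% the symmetric pairing
\left\langle\!\left\langle (v,\alpha),\,(w,\beta) \right\rangle\!\right\rangle
  = \alpha(w) + \beta(v),
% so that graphs of Poisson bivectors and of presymplectic
% two-forms are recovered as special cases.
```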
NASA Astrophysics Data System (ADS)
Reinoso, J.; Paggi, M.; Linder, C.
2017-06-01
Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the EAS method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.
NASA Astrophysics Data System (ADS)
Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.
2016-12-01
The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. These models are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media, as well as erosion and deposition of granular media and moving-bed dynamics.
For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and important qualitative features of the numerical methods including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.
COPS: Large-scale nonlinearly constrained optimization problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondarenko, A.S.; Bortz, D.M.; More, J.J.
2000-02-10
The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
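The intended workflow, feeding a nonlinearly constrained problem to a general optimization solver, can be illustrated with SciPy's SLSQP. COPS problems themselves are large-scale; the toy problem below is the small constrained quadratic from the SciPy documentation, not a COPS entry:

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained problem:
#   minimize (x0 - 1)^2 + (x1 - 2.5)^2
#   subject to  x0 - 2*x1 + 2 >= 0
#              -x0 - 2*x1 + 6 >= 0
#              -x0 + 2*x1 + 2 >= 0,   x >= 0
cons = [
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
]
res = minimize(lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2,
               x0=[2.0, 0.0],
               bounds=[(0, None), (0, None)],
               constraints=cons,
               method="SLSQP")
```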
Energy efficient LED layout optimization for near-uniform illumination
NASA Astrophysics Data System (ADS)
Ali, Ramy E.; Elgala, Hany
2016-09-01
In this paper, we consider the problem of designing energy efficient light emitting diodes (LEDs) layout while satisfying the illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining a near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
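A hedged sketch of the resulting linear program: illuminance is linear in the LED intensities through a Lambertian gain matrix, and total intensity (a proxy for power) is minimized subject to a minimum-illuminance constraint. The geometry, mounting height, Lambertian falloff, and lux target below are all invented for illustration, and the paper's "virtual LED" construction is not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
leds = rng.uniform(0, 4, size=(6, 2))          # candidate LED (x, y) positions
grid = np.array([[x, y] for x in np.linspace(0.5, 3.5, 4)
                        for y in np.linspace(0.5, 3.5, 4)])
h = 2.5                                        # mounting height (m), assumed

# First-order Lambertian gain: E ~ I * cos(phi) * cos(phi) / d^2 = I * h^2 / d^4
d2 = ((grid[:, None, :] - leds[None, :, :]) ** 2).sum(-1) + h ** 2
G = (h ** 2) / d2 ** 2                         # shape: (grid points, LEDs)

E_min = 300.0                                  # assumed lux target
res = linprog(c=np.ones(len(leds)),            # minimise total intensity ~ power
              A_ub=-G,                         # G @ I >= E_min everywhere
              b_ub=-E_min * np.ones(len(grid)),
              bounds=[(0, None)] * len(leds))
```

Minimizing total intensity pushes the achieved illuminance toward the lower bound, which is one way near-uniformity emerges from the constraint set.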
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
2015-06-22
We construct these schemes based on the Low-Diffusion-A and the Streamwise-Upwind-Petrov-Galerkin methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit ...
Factors influencing alcohol safety action project police officers' DWI arrests
DOT National Transportation Integrated Search
1974-04-29
This report summarizes the results of a study to determine the factors influencing ASAP police officers' DWI arrests and the formulation of approaches to minimize the influence of those factors which might tend to constrain the arrest of persons who ...
Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring
NASA Technical Reports Server (NTRS)
Padovan, Joe; Kwang, Abel
1994-01-01
This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, as well as partially and fully parallel, environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due both to updating and inversion.
NASA Astrophysics Data System (ADS)
Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan
2015-02-01
An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterize real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with the payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved for optimality, using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.
Canonical gravity, diffeomorphisms and objective histories
NASA Astrophysics Data System (ADS)
Samuel, Joseph
2000-11-01
This paper discusses the implementation of diffeomorphism invariance in purely Hamiltonian formulations of general relativity. We observe that, if a constrained Hamiltonian formulation derives from a manifestly covariant Lagrangian, the diffeomorphism invariance of the Lagrangian results in the following properties of the constrained Hamiltonian theory: the diffeomorphisms are generated by constraints on the phase space so that (a) the algebra of the generators reflects the algebra of the diffeomorphism group, and (b) the Poisson brackets of the basic fields with the generators reflect the spacetime transformation properties of these basic fields. This suggests that in a purely Hamiltonian approach the requirement of diffeomorphism invariance should be interpreted to include (b) and not just (a) as one might naively suppose. Giving up (b) amounts to giving up objective histories, even at the classical level. This observation has implications for loop quantum gravity which are spelled out in a companion paper. We also describe an analogy between canonical gravity and relativistic particle dynamics to illustrate our main point.
NASA Astrophysics Data System (ADS)
Singh, Gaurav; Krishnan, Girish
2017-06-01
Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exists. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirement.
Advanced Computational Methods for Security Constrained Financial Transmission Rights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria
Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable and in some cases is shown to outperform the widely used CPLEX algorithms. The proposed formulation and NDS based solver is also easily parallelizable, enabling further computational improvement.
Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.
Caro, J Jaime
2016-07-01
Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
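The DICE idea of conditions that persist and are discretely integrated with instantaneous events can be illustrated with a minimal, hypothetical sketch (the two-condition disease model, event times, costs, and utility weights below are all invented for illustration; a real DICE model would be specified in tables, e.g., in MS Excel):

```python
import heapq

# Minimal DICE-style engine (hypothetical two-condition disease model; all
# times, costs and utility weights are invented). Conditions persist with
# levels; events fire at discrete times, update conditions or valuations,
# and may schedule further events.
conditions = {"alive": 1, "severity": 0}
valuations = {"cost": 0.0, "qaly": 0.0}
events = []                               # heap of (time, event_name)

def schedule(t, name):
    heapq.heappush(events, (t, name))

def progress(t):
    conditions["severity"] += 1
    valuations["cost"] += 1000.0          # cost attached to the event
    if conditions["severity"] < 3:
        schedule(t + 1.0, "progress")
    else:
        schedule(t + 0.5, "death")

def death(t):
    conditions["alive"] = 0

handlers = {"progress": progress, "death": death}

t_prev = 0.0
schedule(1.0, "progress")
while events and conditions["alive"]:
    t, name = heapq.heappop(events)
    # discrete integration: accrue utility over the interval just elapsed,
    # at a rate set by the current condition levels
    valuations["qaly"] += (t - t_prev) * (1.0 - 0.2 * conditions["severity"])
    handlers[name](t)
    t_prev = t
```

A run accumulates cost and quality-adjusted time until the absorbing event fires; swapping the event and valuation tables changes the model without changing the engine, which is the transparency DICE aims for.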
Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong
2017-07-01
In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. Then a subsequent controller design is formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of freedom for tuning, so that improved tracking control can be achieved; this is important because uncertainties inevitably exist in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
A decoupled recursive approach for constrained flexible multibody system dynamics
NASA Technical Reports Server (NTRS)
Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung
1989-01-01
A variational-vector calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate deformation kinematics under the small-deformation assumption, and an efficient recursive algorithm for flexible dynamics is developed. Dynamic analysis of a closed loop robot is performed to illustrate the efficiency of the algorithm.
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors.
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
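The fully constrained mixing step described above can be sketched with standard tools. The snippet below (synthetic 4-band spectra and fractions, not the paper's data) unmixes one pixel by bounding fractions to [0, 1] and enforcing sum-to-one with a heavily weighted extra equation, a common least-squares device:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Fully constrained unmixing for one low-resolution pixel (synthetic
# 4-band scene; endmember matrix E and fractions are invented).
E = np.array([[0.9, 0.2, 0.4],
              [0.8, 0.3, 0.1],
              [0.1, 0.7, 0.5],
              [0.2, 0.6, 0.9]])
f_true = np.array([0.5, 0.3, 0.2])       # sum to one, each in [0, 1]
pixel = E @ f_true

w = 1e4                                  # weight of the sum-to-one row
A = np.vstack([E, w * np.ones((1, 3))])
b = np.concatenate([pixel, [w]])

# bounds enforce 0 <= f <= 1 (fully constrained model); the heavily
# weighted extra row enforces sum(f) ~= 1 (partial constraint)
f_hat = lsq_linear(A, b, bounds=(0.0, 1.0)).x
```

Dropping the bounds gives the partially constrained model, and dropping the extra row as well gives the unconstrained model, mirroring the three cases compared in the abstract.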
Study of constrained minimal supersymmetry
NASA Astrophysics Data System (ADS)
Kane, G. L.; Kolda, Chris; Roszkowski, Leszek; Wells, James D.
1994-06-01
Taking seriously the phenomenological indications for supersymmetry, we have made a detailed study of unified minimal SUSY, including many effects at the few percent level in a consistent fashion. We report here a general analysis of what can be studied without choosing a particular gauge group at the unification scale. Firstly, we find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10% level leads to several improvements of previous results and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space and look for patterns to indicate SUSY predictions, so that they do not depend on arbitrary choices of some parameters or untested assumptions. Our results can be viewed as a fully constrained minimal SUSY standard model. The resulting model forms a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by CERN LEP or Fermilab so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the values of mh and mt, the SUSY spectrum, detectability of SUSY at LEP II or Fermilab, B(b→sγ), Γ(Z→bb̄), dark matter, etc., are included in a separate section that might be of more interest to some readers than the technical aspects of model building. We formulate an approach to extracting SUSY parameters from data when superpartners are detected. For small tanβ or large mt, both m1/2 and m0 are entirely bounded from above at ~1 TeV without having to use a fine-tuning constraint.
NASA Astrophysics Data System (ADS)
Hagemann, M.; Gleason, C. J.
2017-12-01
The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
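The role of the empirical priors can be illustrated with a toy conjugate-normal example (the single invented parameter and all numbers below are ours; the actual method uses flow-law likelihoods over several parameters at once):

```python
import numpy as np

# Toy conjugate-normal illustration of gage-informed priors: an empirical
# prior on a log base cross-sectional area, built from synthetic
# "ADCP-derived" values, is updated by a noisy remote-sensing estimate.
rng = np.random.default_rng(0)
adcp_log_A0 = rng.normal(5.0, 0.8, size=10_000)   # synthetic gage-derived values
mu0, s0 = adcp_log_A0.mean(), adcp_log_A0.std()   # empirical prior moments

obs, s_obs = 5.6, 0.5                             # remote estimate and its std
post_var = 1.0 / (1.0 / s0**2 + 1.0 / s_obs**2)   # precision-weighted update
post_mean = post_var * (mu0 / s0**2 + obs / s_obs**2)
# the posterior pulls the remote estimate toward the in-situ prior and is
# less uncertain than either source alone
```

This is the sense in which information transfers from in-situ gaging stations to the remote-sensed estimate: the prior both anchors and sharpens the otherwise under-constrained inversion.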
Gold, Harris; Joback, Kevin; Geis, Steven; Bowman, George; Mericas, Dean; Corsi, Steven R.; Ferguson, Lee
2010-01-01
The current research was conducted to identify alternative aircraft and pavement deicer and anti-icer formulations with improved environmental characteristics compared to currently used commercial products (2007). The environmental characteristics of primary concern are the biochemical oxygen demand (BOD) and aquatic toxicity of the fully formulated products. Except when the distinction among products is necessary for clarity, “deicer” will refer to aircraft-deicing fluids (ADFs), aircraft anti-icing fluids (AAFs), and pavement-deicing materials (PDMs).
Energetic Materials Optimization via Constrained Search
2015-06-01
steps. 3. Optimization Methodology. Our optimization problem is formulated as a constrained maximization:

max_{x ∈ CCS} P(x)  s.t.  TED(x) − 9.75 ≥ 0,  SV(x) − 9 ≥ 0,  5 − SA(x) ≥ 0,  (1)

where TED(x) is the total energy of detonation (TED) of compound x from the chosen chemical subspace (CCS) of chemical compound... max problem,

max_{x ∈ CCS} min_{λ ∈ R³₊} P(x) − λᵀ C(x),  (2)

where C(x) is the vector of constraint violations, i.e., η(9.75 − TED(x)), η(9 − SV(x)), η(SA(x) − 5).
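The penalized form of Eq. (2) can be sketched numerically. In the snippet below, P, TED, SV and SA are invented placeholder models (not real energetic-material properties), the multipliers are held fixed, and a naive random sample stands in for the paper's optimizer:

```python
import numpy as np

# Numerical sketch of Eq. (2): maximize P(x) - lambda^T C(x), where C(x)
# collects hinge-style constraint violations. P, TED, SV and SA below are
# invented placeholders, chosen only so the feasible set is nonempty.
def P(x):   return -np.sum((x - 1.0) ** 2)   # placeholder performance
def TED(x): return 8.0 + 2.0 * x[0]          # placeholder "TED(x)"
def SV(x):  return 7.0 + 3.0 * x[1]          # placeholder "SV(x)"
def SA(x):  return 2.0 + 4.0 * x[1]          # placeholder "SA(x)"

def C(x):
    h = lambda v: max(0.0, v)                # eta(): hinge on each violation
    return np.array([h(9.75 - TED(x)), h(9.0 - SV(x)), h(SA(x) - 5.0)])

lam = np.array([10.0, 10.0, 10.0])           # fixed multipliers for the sketch
best_x, best_val = None, -np.inf
for x in np.random.default_rng(1).uniform(0.0, 2.0, size=(5000, 2)):
    v = P(x) - lam @ C(x)
    if v > best_val:
        best_x, best_val = x, v
# with large lam the winner is (near-)feasible: TED >= 9.75, SV >= 9, SA <= 5
```

The point of the min-max form is visible even here: once the multipliers are large, the unconstrained search of the penalized objective lands on points that satisfy all three property constraints.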
Continuous spin fields of mixed-symmetry type
NASA Astrophysics Data System (ADS)
Alkalaev, Konstantin; Grigoriev, Maxim
2018-03-01
We propose a description of continuous spin massless fields of mixed-symmetry type in Minkowski space at the level of equations of motion. It is based on an appropriately modified version of the constrained system originally used to describe massless bosonic fields of mixed-symmetry type. The description is shown to produce generalized versions of triplet, metric-like, and light-cone formulations. In particular, for scalar continuous spin fields we reproduce the Bekaert-Mourad formulation and the Schuster-Toro formulation. Because a continuous spin system inevitably involves an infinite number of fields, specification of the allowed class of field configurations becomes a part of its definition. We show that the naive choice leads to an empty system and propose a suitable class resulting in the correct degrees of freedom. We also demonstrate that the gauge symmetries present in the formulation are all Stueckelberg-like, so that the continuous spin system is not a genuine gauge theory.
New nonlinear control algorithms for multiple robot arms
NASA Technical Reports Server (NTRS)
Tarn, T. J.; Bejczy, A. K.; Yun, X.
1988-01-01
Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. For both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and a simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output decoupled system. Linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, choosing a general output equation makes it possible to superimpose the position and velocity error feedback with the force-torque error feedback simultaneously in the task space.
Formulating viscous hydrodynamics for large velocity gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Scott
2008-02-15
Viscous corrections to relativistic hydrodynamics, which are usually formulated for small velocity gradients, have recently been extended from Navier-Stokes formulations to a class of treatments based on Israel-Stewart equations. Israel-Stewart treatments, which treat the spatial components of the stress-energy tensor τ_ij as dynamical objects, introduce new parameters, such as the relaxation times describing nonequilibrium behavior of the elements τ_ij. By considering linear response theory and entropy constraints, we show how the additional parameters are related to fluctuations of τ_ij. Furthermore, the Israel-Stewart parameters are analyzed for their ability to provide stable and physical solutions for sound waves. Finally, it is shown how these parameters, which are naturally described by correlation functions in real time, might be constrained by lattice calculations, which are based on path-integral formulations in imaginary time.
Estimating free-body modal parameters from tests of a constrained structure
NASA Technical Reports Server (NTRS)
Cooley, Victor M.
1993-01-01
Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.
Bryan, Stephen; Lilien, Steven
2003-10-01
Regulators are trying to clear up the muddle created by earnings-report adjustments called "pro formas" that companies issue. Constraining such reporting, as the regulators seem bent on doing, isn't the solution. Firms should increase alternative reporting--and fully account for their accounting.
A second-generation constrained reaction volume shock tube
NASA Astrophysics Data System (ADS)
Campbell, M. F.; Tulgestke, A. M.; Davidson, D. F.; Hanson, R. K.
2014-05-01
We have developed a shock tube that features a sliding gate valve in order to mechanically constrain the reactive test gas mixture to an area close to the shock tube endwall, separating it from a specially formulated non-reactive buffer gas mixture. This second-generation Constrained Reaction Volume (CRV) strategy enables near-constant-pressure shock tube test conditions for reactive experiments behind reflected shocks, thereby enabling improved modeling of the reactive flow field. Here we provide details of the design and operation of the new shock tube. In addition, we detail special buffer gas tailoring procedures, analyze the buffer/test gas interactions that occur on gate valve opening, and outline the size range of fuels that can be studied using the CRV technique in this facility. Finally, we present example low-temperature ignition delay time data to illustrate the CRV shock tube's performance.
An Assessment of the State-of-the-Art in Multidisciplinary Aeromechanical Analyses
2008-01-01
monolithic formulations. In summary, for aerospace structures, partitioned formulations provide fundamental advantages over fully coupled ones, in addition... important frequencies of local analysis directly to global analysis using detailed modeling. Performed judiciously, based on a fundamental understanding of... in 2000 has comprehensively described the problem, and reviewed the status of fundamental understanding, experimental data, and analytical
Dynamics of Compressible Convection and Thermochemical Mantle Convection
NASA Astrophysics Data System (ADS)
Liu, Xi
The Earth's long-wavelength geoid anomalies have long been used to constrain the dynamics and viscosity structure of the mantle in an isochemical, whole-mantle convection model. However, there is strong evidence that the seismically observed large low shear velocity provinces (LLSVPs) in the lowermost mantle are chemically distinct and denser than the ambient mantle. In this thesis, I investigated how chemically distinct and dense piles influence the geoid. I formulated dynamically self-consistent 3D spherical convection models with realistic mantle viscosity structure which reproduce Earth's dominantly spherical harmonic degree-2 convection. The models revealed a compensation effect of the chemically dense LLSVPs. Next, I formulated instantaneous flow models based on seismic tomography to compute the geoid and constrain mantle viscosity assuming thermochemical convection with the compensation effect. Thermochemical models reconcile the geoid observations. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models, and both prefer a weak transition zone. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modelling. Another part of this thesis describes analyses of the influence of mantle compressibility on thermal convection in an isoviscous and compressible fluid with infinite Prandtl number. A new formulation of the propagator matrix method is implemented to compute the critical Rayleigh number and the corresponding eigenfunctions for compressible convection. Heat flux and thermal boundary layer properties are quantified in numerical models and scaling laws are developed.
Constraining Assertion: An Account of Context-Sensitivity
ERIC Educational Resources Information Center
Villanueva Chigne, Eduardo
2012-01-01
Many philosophers believe that if "S" is an unambiguous, context-sensitive, declarative sentence and "p" is a proposition asserted (without conversational implicatures) by a literal utterance of "S" in a context "c," then "p" is fully determined by the linguistic meaning of "S" in…
Disturbance patterns in a socio-ecological system at multiple scales
G. Zurlini; Kurt H. Riitters; N. Zaccarelli; I. Petrosillo; K.B. Jones; L. Rossi
2006-01-01
Ecological systems with hierarchical organization and non-equilibrium dynamics require multiple-scale analyses to comprehend how a system is structured and to formulate hypotheses about regulatory mechanisms. Characteristic scales in real landscapes are determined by, or at least reflect, the spatial patterns and scales of constraining human interactions with the...
Constraint elimination in dynamical systems
NASA Technical Reports Server (NTRS)
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
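The SVD-based elimination can be demonstrated on the simplest constrained system, a planar pendulum written in Cartesian coordinates (this toy example is ours, not the paper's): the constraint Jacobian's null space supplies the minimum-dimension equations of motion.

```python
import numpy as np

# SVD-based constraint elimination on a toy system: a planar pendulum in
# Cartesian coordinates q = (x, y) with constraint phi = x^2 + y^2 - L^2
# = 0, so at the acceleration level A(q) qdd = b with A = [2x, 2y] and
# b = -2*(xd^2 + yd^2).
m, g, L = 1.0, 9.81, 2.0
theta, thetadot = 0.3, 0.4                      # current state (angle form)
q = L * np.array([np.sin(theta), -np.cos(theta)])
qd = L * thetadot * np.array([np.cos(theta), np.sin(theta)])

M = m * np.eye(2)                               # mass matrix
f = np.array([0.0, -m * g])                     # applied force (gravity)
A = 2.0 * q.reshape(1, 2)                       # constraint Jacobian
b = np.array([-2.0 * (qd @ qd)])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))                      # constraint rank
N = Vt[r:].T                                    # null-space basis of A
qdd_p = np.linalg.pinv(A) @ b                   # particular solution

# minimum-dimension equations of motion: (N^T M N) u = N^T (f - M qdd_p)
u = np.linalg.solve(N.T @ M @ N, N.T @ (f - M @ qdd_p))
qdd = qdd_p + N @ u                             # constraint-consistent accel.
```

The recovered Cartesian acceleration matches the textbook pendulum result obtained from θ̈ = −(g/L) sin θ, while the solved system has dimension equal to the number of independent generalized speeds, which is the point of the SVD reduction.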
Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability
Liu, Guodong; Starke, Michael R.; Xiao, B.; ...
2017-01-13
To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation taking into account the probability distributions of forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
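The PSI concept can be checked with a toy Monte Carlo calculation. The Gaussian error models and all numbers below are illustrative assumptions of ours; the paper instead approximates the same probability inside a multi-interval MILP:

```python
import numpy as np

# Monte Carlo sketch of the probability of successful islanding (PSI):
# islanding succeeds if scheduled up/down spinning reserve covers the net
# forecast error (load error minus wind and PV errors) at the islanding
# instant. Error standard deviations (MW) are invented for illustration.
rng = np.random.default_rng(42)
n = 200_000
err_load = rng.normal(0.0, 0.4, n)
err_wind = rng.normal(0.0, 0.6, n)
err_pv   = rng.normal(0.0, 0.3, n)
net = err_load - err_wind - err_pv   # extra demand the island must absorb

R_up, R_dn = 1.5, 1.5                # scheduled reserves, MW
psi = float(np.mean((net <= R_up) & (net >= -R_dn)))
```

For independent Gaussian errors the net error has standard deviation sqrt(0.4² + 0.6² + 0.3²) ≈ 0.78 MW, so 1.5 MW of reserve in each direction yields a PSI near 0.945; raising either reserve raises PSI, which is the trade-off the chance-constrained schedule optimizes.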
Optimal apodization design for medical ultrasound using constrained least squares part I: theory.
Guenther, Drake A; Walker, William F
2007-02-01
Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
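The first CLS approach can be sketched in a narrowband 1-D array analogue (our simplification; the paper works with broadband pulse-echo point spread functions): minimize the far-field pattern energy outside a mainlobe boundary subject to a unit broadside-response linear constraint, whose closed-form solution is w ∝ R⁻¹c.

```python
import numpy as np

# 1-D narrowband analogue of the CLS apodization design (first approach):
# minimize pattern energy outside a mainlobe boundary subject to a unit
# broadside-response linear constraint; all numbers are illustrative.
N, d = 16, 0.5                          # elements, spacing in wavelengths
x = d * (np.arange(N) - (N - 1) / 2)    # element positions

u = np.linspace(-1.0, 1.0, 2001)        # u = sin(angle)
outside = np.abs(u) > 0.15              # mainlobe boundary
V = np.exp(2j * np.pi * np.outer(u[outside], x))   # steering rows
R = V.conj().T @ V * (u[1] - u[0])      # out-of-boundary energy matrix
R += 1e-9 * np.eye(N)                   # regularization

c = np.ones(N)                          # broadside steering vector (u = 0)
w = np.linalg.solve(R, c)
w = w / (c.conj() @ w)                  # enforce c^H w = 1

def out_energy(wt):
    """Pattern energy outside the mainlobe boundary."""
    return float(np.real(wt.conj() @ R @ wt))
```

Compared with uniform weights (which satisfy the same constraint), the CLS weights place strictly less energy outside the boundary, the quantity the contrast-resolution metric penalizes.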
NASA Astrophysics Data System (ADS)
Musharbash, Eleonora; Nobile, Fabio
2018-02-01
In this paper we propose a method for the strong imposition of random Dirichlet boundary conditions in the Dynamical Low Rank (DLR) approximation of parabolic PDEs and, in particular, incompressible Navier-Stokes equations. We show that the DLR variational principle can be set in the constrained manifold of all rank S random fields with a prescribed value on the boundary, expressed in low rank format, with rank smaller than S. We characterize the tangent space to the constrained manifold by means of a Dual Dynamically Orthogonal (Dual DO) formulation, in which the stochastic modes are kept orthonormal and the deterministic modes satisfy suitable boundary conditions, consistent with the original problem. The Dual DO formulation is also convenient for including the incompressibility constraint when dealing with incompressible Navier-Stokes equations. We show the performance of the proposed Dual DO approximation on two numerical test cases: the classical benchmark of a laminar flow around a cylinder with random inflow velocity, and a biomedical application simulating blood flow in a realistic carotid artery reconstructed from MRI data with random inflow conditions coming from Doppler measurements.
Constrained model predictive control, state estimation and coordination
NASA Astrophysics Data System (ADS)
Yan, Jun
In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. 
Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in the dynamics, own self-measurement noises, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate to apply. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance and the information quality in the way that large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
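The Part (i) idea of re-posing state constraints in probabilistic form can be sketched for a scalar Gaussian state estimate. This is the standard chance-constraint tightening, shown with invented numbers, not the dissertation's full formulation:

```python
import numpy as np
from scipy.stats import norm

# Chance-constraint tightening for a scalar Gaussian state estimate: with
# the true state x ~ N(xhat, sigma^2), requiring Pr(x <= b) >= 1 - eps is
# equivalent to the deterministic constraint xhat <= b - z*sigma.
b, eps, sigma = 10.0, 0.05, 0.8
z = norm.ppf(1.0 - eps)                  # one-sided Gaussian quantile
xhat_max = b - z * sigma                 # tightened bound on the estimate

# Monte Carlo check at the tightened boundary: the violation rate is ~eps,
# whereas certainty-equivalence (allowing xhat = b) would violate half
# the time
rng = np.random.default_rng(7)
x = rng.normal(xhat_max, sigma, size=1_000_000)
violation_rate = float(np.mean(x > b))
```

This makes concrete how the constraint adapts to estimation quality: a larger estimator variance sigma shrinks the admissible region for the estimate, which is the coupling between control performance and information quality exploited throughout the dissertation.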
Diameter-Constrained Steiner Tree
NASA Astrophysics Data System (ADS)
Ding, Wei; Lin, Guohui; Xue, Guoliang
Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D_0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem, and it is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.
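The diameter quantity being constrained is easy to compute for a candidate tree. The sketch below (with made-up edge weights) finds the maximum leaf-to-leaf path weight, which is what must stay at or below D_0:

```python
from collections import defaultdict

# Leaf-to-leaf diameter of a weighted tree (illustrative weights):
# for each leaf, a depth-first traversal accumulates path weights,
# and the diameter is the largest distance between two distinct leaves.
def leaf_diameter(edges):
    adj = defaultdict(list)
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))
    leaves = [v for v in adj if len(adj[v]) == 1]

    def far(src):
        dist = {src: 0.0}
        stack = [src]
        while stack:
            v = stack.pop()
            for nb, w in adj[v]:
                if nb not in dist:
                    dist[nb] = dist[v] + w
                    stack.append(nb)
        return dist

    return max(far(s)[t] for s in leaves for t in leaves if s != t)

edges = [("s", "a", 2.0), ("a", "t1", 1.0), ("a", "t2", 3.0), ("s", "t3", 4.0)]
# leaf pairs: t1-t2 via a (4), t1-t3 (7), t2-t3 (9) -> diameter 9
```

In an FPTAS setting, a check like this would verify feasibility of a rounded-weight solution against the original weights.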
Onward through the Fog: Uncertainty and Management Adaptation in Systems Analysis and Design
1990-07-01
has fallen into stereotyped problem formulations and analytical approaches. In particular, treatments of uncertainty are typically quite incomplete...and often conceptually wrong. This report argues that these shortcomings produce pervasive systematic biases in analyses. Problem formulations ...capability were lost. The expected number of aircraft that would not be fully mission capable thirty days later was roughly twice the number
Uncertain dynamical systems: A differential game approach
NASA Technical Reports Server (NTRS)
Gutman, S.
1976-01-01
A class of dynamical systems in a conflict situation is formulated and discussed, and the formulation is applied to the study of an important class of systems in the presence of uncertainty. The uncertainty is deterministic and the only assumption is that its value belongs to a known compact set. Asymptotic stability is fully discussed with application to variable structure and model reference control systems.
Haineault, Caroline; Gourde, Pierrette; Perron, Sylvie; Désormeaux, André; Piret, Jocelyne; Omar, Rabeea F; Tremblay, Roland R; Bergeron, Michel G
2003-08-01
The contraceptive properties of a gel formulation containing sodium lauryl sulfate were investigated in both in vitro and in vivo models. Results showed that sodium lauryl sulfate inhibited, in a concentration-dependent manner, the activity of sheep testicular hyaluronidase. Sodium lauryl sulfate also completely inhibited human sperm motility as evaluated by the 30-sec Sander-Cramer test. The acid-buffering capacity of gel formulations containing sodium lauryl sulfate increased with the molarity of the citrate buffers used for their preparations. Furthermore, experiments in which semen was mixed with undiluted gel formulations in different proportions confirmed their physiologically relevant buffering capacity. Intravaginal application of the gel formulation containing sodium lauryl sulfate to rabbits before their artificial insemination with freshly ejaculated semen completely prevented egg fertilization. The gel formulation containing sodium lauryl sulfate was fully compatible with nonlubricated latex condoms. Taken together, these results suggest that the gel formulation containing sodium lauryl sulfate could represent a potential candidate for use as a topical vaginal spermicidal formulation to provide fertility control in women.
Encapsulated Ball Bearings for Rotary Micro Machines
2007-01-01
maintaining fabrication simplicity and stability. Although ball bearings have been demonstrated in devices such as linear micromotors [6, 7] and rotary micromotors [8], they have yet to be integrated into the microfabrication process to fully constrain the dynamic element. In the cases of both Modafe et
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
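The Lagrangian descent idea — encoding each sample by minimizing distortion plus λ times its code length, then re-estimating codewords and lengths — can be sketched for a single-stage entropy-constrained scalar quantizer (a simplification of the multistage RVQ case; source, codebook size and λ are illustrative):

```python
import numpy as np

# Entropy-constrained scalar quantizer via the Lagrangian J = D + lam*R
# (single-stage simplification of EC-RVQ). Each sample picks the codeword
# minimizing squared error plus lam times its current ideal code length;
# codewords and lengths are then re-estimated, descending the Lagrangian.
rng = np.random.default_rng(0)
x = rng.laplace(0.0, 1.0, size=50_000)   # memoryless Laplacian source
lam = 0.1
code = np.linspace(-4.0, 4.0, 16)        # initial codebook
p = np.full(16, 1.0 / 16.0)              # codeword probabilities

for _ in range(20):
    bits = -np.log2(p)                   # ideal code lengths
    J = (x[:, None] - code) ** 2 + lam * bits
    idx = np.argmin(J, axis=1)           # Lagrangian-optimal encoding
    for i in range(16):
        sel = idx == i
        if sel.any():
            code[i] = x[sel].mean()      # centroid update
        p[i] = max(sel.mean(), 1e-12)    # probability/length update

D = float(np.mean((x - code[idx]) ** 2))        # distortion
mask = p > 1e-9
H = float(-np.sum(p[mask] * np.log2(p[mask])))  # entropy (rate proxy)
```

Sweeping λ traces the operational distortion-rate curve; the EC-RVQ algorithm applies the same Lagrangian encoding rule jointly across the direct-sum stages.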
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards a control law design methodology that supports this paradigm. This goal is pursued by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where constrained-structure means that the dynamic structure (e.g., dynamic order, pole locations, zero locations) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where the indicated iteration occurs over the choice of the compensator dynamic structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic structure in an appropriate fashion.
Mixed finite-element formulations in piezoelectricity and flexoelectricity
Mao, Sheng; Purohit, Prashant K; Aravas, Nikolaos
2016-06-01
Flexoelectricity, the linear coupling of strain gradient and electric polarization, is inherently a size-dependent phenomenon. The energy storage function for a flexoelectric material depends not only on polarization and strain, but also strain-gradient. Thus, conventional finite-element methods formulated solely on displacement are inadequate to treat flexoelectric solids since gradients raise the order of the governing differential equations. Here, we introduce a computational framework based on a mixed formulation developed previously by one of the present authors and a colleague. This formulation uses displacement and displacement-gradient as separate variables which are constrained in a 'weighted integral sense' to enforce their known relation. We derive a variational formulation for boundary-value problems for piezo- and/or flexoelectric solids. We validate this computational framework against available exact solutions. Our new computational method is applied to more complex problems, including a plate with an elliptical hole, stationary cracks, as well as tension and shear of solids with a repeating unit cell. Our results address several issues of theoretical interest, generate predictions of experimental merit and reveal interesting flexoelectric phenomena with potential for application. PMID:27436967
Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping
2017-12-21
Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained property of an underwater environment, such as restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of particles. Thus, the particle effectiveness is enhanced to avoid "particle degeneracy" problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252
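The LSSVR observation refinement aside, the underlying bootstrap particle filter — propagate particles, weight them by the measurement likelihood, resample when the weights degenerate — can be sketched as follows. The 2-D static target, sensor layout, noise levels, and random-walk motion model are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, sensor, meas_std=0.5):
    """One bootstrap particle-filter update for a 2-D target observed through
    a noisy range measurement z from one sensor node."""
    particles = particles + rng.normal(0, 0.2, particles.shape)  # random-walk motion
    pred = np.linalg.norm(particles - sensor, axis=1)            # predicted ranges
    weights = weights * np.exp(-0.5 * ((z - pred) / meas_std) ** 2)
    weights = weights / weights.sum()
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:       # effective sample size check:
        idx = rng.choice(n, n, p=weights)        # resample on "particle degeneracy"
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

target = np.array([3.0, -2.0])                   # unknown target (for simulation)
sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, -5.0], [5.0, -5.0]])
particles = rng.uniform(-6, 6, (2000, 2))
weights = np.full(2000, 1.0 / 2000)
for _ in range(30):                              # 30 time frames
    for s in sensors:                            # one range measurement per node
        z = np.linalg.norm(target - s) + rng.normal(0, 0.5)
        particles, weights = particle_filter_step(particles, weights, z, s)
estimate = weights @ particles                   # weighted posterior mean
```

The node-selection strategy of the paper would restrict the inner loop to a chosen subset of `sensors` at each frame.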
A Mass Tracking Formulation for Bubbles in Incompressible Flow
2012-10-14
using the ideas from [19] to couple together incompressible flow with fully nonlinear compressible flow, including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow
A fully Sinc-Galerkin method for Euler-Bernoulli beam models
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.; Lund, J.
1990-01-01
A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable-parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete system which corresponds to the time-dependent partial differential equations of interest is then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.
NASA Astrophysics Data System (ADS)
Stefanski, Douglas Lawrence
A finite volume method for solving the Reynolds-Averaged Navier-Stokes (RANS) equations on unstructured hybrid grids is presented. Capabilities for handling arbitrary mixtures of reactive gas species within the unstructured framework are developed. The modeling of turbulent effects is carried out via the 1998 Wilcox k-ω model. This unstructured solver is incorporated within VULCAN -- a multi-block structured grid code -- as part of a novel patching procedure in which non-matching interfaces between structured blocks are replaced by transitional unstructured grids. This approach provides a fully-conservative alternative to VULCAN's non-conservative patching methods for handling such interfaces. In addition, the further development of the standalone unstructured solver toward large-eddy simulation (LES) applications is also carried out. Dual time-stepping using a Crank-Nicolson formulation is added to recover time-accuracy, and modeling of sub-grid scale effects is incorporated to provide higher fidelity LES solutions for turbulent flows. A switch based on the work of Ducros et al. is implemented to transition from a monotonicity-preserving flux scheme near shocks to a central-difference method in vorticity-dominated regions in order to better resolve small-scale turbulent structures. The updated unstructured solver is used to carry out large-eddy simulations of a supersonic constrained mixing layer.
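The Ducros-type shock sensor referenced above is commonly written in the following generic form (this is the standard variant from the literature, not necessarily the exact expression used in this work):

```latex
% Ducros sensor: near 1 where dilatation dominates (shocks),
% near 0 in vorticity-dominated regions (resolved turbulence).
\Theta \;=\; \frac{(\nabla \cdot \mathbf{u})^{2}}
                  {(\nabla \cdot \mathbf{u})^{2} + \lVert \boldsymbol{\omega} \rVert^{2} + \epsilon},
\qquad \boldsymbol{\omega} = \nabla \times \mathbf{u},
```

where ε is a small constant preventing division by zero; the solver blends toward the monotonicity-preserving flux as Θ → 1 and toward central differencing as Θ → 0.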
A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves
NASA Astrophysics Data System (ADS)
Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.
2017-12-01
This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that is implemented using Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is accurate and more efficient than an alternative inversion method. The algorithm's performance is demonstrated by numerical examples including the case of noisy measurement responses and the case of a reduced number of sources and receivers.
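The Gauss-Newton update itself is generic; the sketch below applies it to a small exponential fit rather than to the PML-endowed wave equations (the model, data, and starting point are illustrative assumptions):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Plain Gauss-Newton iteration: x <- x - (J^T J)^{-1} J^T r(x),
    the same update structure the inversion applies to the velocity profile."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Illustrative inverse problem: recover (a, b) in y = a * exp(b t) from clean data.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda x: x[0] * np.exp(x[1] * t) - y
jacobian = lambda x: np.column_stack([np.exp(x[1] * t),
                                      x[0] * t * np.exp(x[1] * t)])
x_hat = gauss_newton(residual, jacobian, [1.0, 0.0])
```

In the full-waveform setting, evaluating `residual` and the Jacobian-vector products requires forward and adjoint wave solves, which is where the hybrid PML efficiency matters.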
Finite Element Analysis in Concurrent Processing: Computational Issues
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett
2004-01-01
The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
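A minimal sketch of the displacement-formulation idea — minimizing the total potential energy directly with conjugate gradients, using only matrix-vector products so that no factorization fill-in occurs — is given below. The 1-D tridiagonal stiffness matrix is an illustrative stand-in for a real structural model.

```python
import numpy as np

def cg_energy_min(K_matvec, f, iters=200, tol=1e-10):
    """Minimize the potential energy 0.5 u^T K u - f^T u by conjugate
    gradients. Only K-times-vector products are needed, which is why such
    factorization-free solvers suit concurrent processing."""
    u = np.zeros_like(f)
    r = f.copy()                       # residual = negative energy gradient
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Kp = K_matvec(p)
        alpha = rs / (p @ Kp)          # exact line search along p
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # K-conjugate search direction
        rs = rs_new
    return u

# 1-D bar stiffness (tridiagonal, SPD) as a stand-in "structure".
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
u = cg_energy_min(lambda v: K @ v, f)
```

The matrix-vector product is the only kernel that must be parallelized, and it involves no fill-in of null entries.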
Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering
NASA Technical Reports Server (NTRS)
Cole, Bjorn F.; Jenkins, J. Steven
2015-01-01
In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remain separate from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.
Isotretinoin Oil-Based Capsule Formulation Optimization
Tsai, Pi-Ju; Huang, Chi-Te; Lee, Chen-Chou; Li, Chi-Lin; Huang, Yaw-Bin; Tsai, Yi-Hung; Wu, Pao-Chu
2013-01-01
The purpose of this study was to develop and optimize an isotretinoin oil-based capsule with a specific dissolution pattern. A three-factor constrained mixture design was used to prepare the model formulations. The independent factors were the components of the oil-based capsule: beeswax (X1), hydrogenated coconut oil (X2), and soybean oil (X3). The drug release percentages at 10, 30, 60, and 90 min were selected as responses. The effect of the formulation factors on the responses was inspected using response surface methodology (RSM). Multiple-response optimization was performed to search for the appropriate formulation with the specific release pattern. It was found that the interaction effects of the formulation factors (X1X2, X1X3, and X2X3) showed more potential influence than the main factors (X1, X2, and X3). An optimal predicted formulation with Y10min, Y30min, Y60min, and Y90min release values of 12.3%, 36.7%, 73.6%, and 92.7% at X1, X2, and X3 of 5.75, 15.37, and 78.88, respectively, was developed. The new formulation was prepared and subjected to the dissolution test. The similarity factor f2 was 54.8, indicating that the dissolution pattern of the new optimized formulation was equivalent to the predicted profile. PMID:24068886
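The fitting step behind such a mixture design can be sketched as an ordinary least-squares fit of a quadratic Scheffé polynomial, which has no intercept because the three component fractions sum to one. The design points and coefficients below are hypothetical, not the paper's data.

```python
import numpy as np

def scheffe_design_matrix(X):
    """Quadratic Scheffe mixture model: response = sum_i b_i x_i
    + sum_{i<j} b_ij x_i x_j (no intercept, since x1 + x2 + x3 = 1)."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Hypothetical mixture points (fractions of the three oil-base components).
rng = np.random.default_rng(0)
points = rng.dirichlet(np.ones(3), size=30)          # rows sum to 1
b_true = np.array([10.0, 40.0, 80.0, -30.0, 20.0, 15.0])
y = scheffe_design_matrix(points) @ b_true           # noiseless "responses"
b_hat, *_ = np.linalg.lstsq(scheffe_design_matrix(points), y, rcond=None)
```

In the study, one such model is fitted per response time point (10, 30, 60, 90 min), and the fitted surfaces are then searched jointly for the target release profile.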
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
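As a minimal illustration of pixel-level fusion posed as a convex least-squares problem — a drastically simplified stand-in for the paper's objective, which also exploits spatial context — consider a quadratic data-fit term for each input solved by gradient descent:

```python
import numpy as np

def fuse(color_lum, pan, mu=4.0, steps=200, lr=0.1):
    """Fused image x minimizes ||x - color_lum||^2 + mu * ||x - pan||^2
    pixelwise via gradient descent; the problem is convex, so the iterate
    converges to the global minimizer (c + mu * p) / (1 + mu)."""
    x = color_lum.copy()
    for _ in range(steps):
        grad = 2 * (x - color_lum) + 2 * mu * (x - pan)
        x -= lr * grad
    return x

c = np.array([[0.2, 0.4], [0.6, 0.8]])   # toy low-res luminance
p = np.array([[0.3, 0.3], [0.7, 0.7]])   # toy high-res monochrome
x = fuse(c, p)
```

Because every pixel is independent here, the update is embarrassingly parallel, which is the property the paper's formulation is designed to preserve.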
CAD of control systems: Application of nonlinear programming to a linear quadratic formulation
NASA Technical Reports Server (NTRS)
Fleming, P.
1983-01-01
The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.
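A toy version of the recast problem — one quadratic cost minimized subject to another quadratic cost acting as a constraint — can be sketched with a quadratic-penalty gradient method. The CAD package described above uses a proper nonlinear programming algorithm; all weights and limits here are invented for illustration.

```python
import numpy as np

Q0 = np.diag([1.0, 4.0])          # "design objective" weight (illustrative)
Q1 = np.eye(2)                    # "constraint" cost weight
d = np.array([2.0, 0.0])          # constraint center
c = 1.0                           # constraint level: (x-d)^T Q1 (x-d) <= c

def grad(x, rho):
    """Gradient of J0 plus a quadratic penalty on constraint violation."""
    g = 2.0 * Q0 @ x
    viol = (x - d) @ Q1 @ (x - d) - c
    if viol > 0:                  # penalize only when the constraint is violated
        g += rho * 2.0 * viol * (2.0 * Q1 @ (x - d))
    return g

x = d.copy()                      # start at the feasible constraint center
for rho in (1.0, 10.0, 100.0, 1000.0):   # tighten the penalty gradually
    lr = 0.1 / rho                        # step size shrinks with stiffness
    for _ in range(2000):
        x = x - lr * grad(x, rho)
# For these numbers the constrained optimum is x = [1, 0] (constraint active).
```

Efficient gradient evaluation of the interrelated quadratic costs, as in the paper, is what makes such iterative schemes practical for interactive CAD use.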
A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport
NASA Astrophysics Data System (ADS)
Robinson, P. B.; Peterson, J. D. L.
2005-12-01
The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48
Radiative interactions in laminar duct flows
NASA Technical Reports Server (NTRS)
Trivedi, P. A.; Tiwari, S. N.
1990-01-01
Analyses and numerical procedures are presented for infrared radiative energy transfer in gases when other modes of energy transfer occur simultaneously. Two types of geometries are considered, a parallel plate duct and a circular duct. Fully developed laminar incompressible flows of absorbing-emitting species in black surfaced ducts are considered under the conditions of uniform wall heat flux. The participating species considered are OH, CO, CO2, and H2O. Nongray as well as gray formulations are developed for both geometries. Appropriate limiting solutions of the governing equations are obtained and conduction-radiation interaction parameters are evaluated. Tien and Lowder's wide band model correlation was used in nongray formulation. Numerical procedures are presented to solve the integro-differential equations for both geometries. The range of physical variables considered are 300 to 2000 K for temperature, 0.1 to 100.0 atm for pressure, and 0.1 to 100 cm spacings between plates/radius of the tube. An extensive parametric study based on nongray formulation is presented. Results obtained for different flow conditions indicate that the radiative interactions can be quite significant in fully developed incompressible flows.
Turbulence Model Predictions of Strongly Curved Flow in a U-Duct
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.; Morrison, Joseph H.
2000-01-01
The ability of three types of turbulence models to accurately predict the effects of curvature on the flow in a U-duct is studied. An explicit algebraic stress model performs slightly better than one- or two-equation linear eddy viscosity models, although it is necessary to fully account for the variation of the production-to-dissipation-rate ratio in the algebraic stress model formulation. In their original formulations, none of these turbulence models fully captures the suppressed turbulence near the convex wall, whereas a full Reynolds stress model does. Some of the underlying assumptions used in the development of algebraic stress models are investigated and compared with the computed flowfield from the full Reynolds stress model. Through this analysis, the assumption of Reynolds stress anisotropy equilibrium used in the algebraic stress model formulation is found to be incorrect in regions of strong curvature. By accounting for the local variation of the principal axes of the strain rate tensor, the explicit algebraic stress model correctly predicts the suppressed turbulence in the outer part of the boundary layer near the convex wall.
I Think (That) Something's Missing: Complementizer Deletion in Nonnative E-Mails
ERIC Educational Resources Information Center
Durham, Mercedes
2011-01-01
Sociolinguistic competence is not often examined in nonnative English acquisition. This is particularly true for features where the variants are neither stylistically nor socially constrained, but rather are acceptable in all circumstances. Learning to use a language fully, however, implies being able to deal with this type of…
Optimum oil production planning using infeasibility driven evolutionary algorithm.
Singh, Hemant Kumar; Ray, Tapabrata; Sarker, Ruhul
2013-01-01
In this paper, we discuss a practical oil production planning optimization problem. For oil wells with insufficient reservoir pressure, gas is usually injected to artificially lift oil, a practice commonly referred to as enhanced oil recovery (EOR). The total gas that can be used for oil extraction is constrained by daily availability limits. The oil extracted from each well is known to be a nonlinear function of the gas injected into the well and varies between wells. The problem is to identify the optimal amount of gas that needs to be injected into each well to maximize the amount of oil extracted subject to the constraint on the total daily gas availability. The problem has long been of practical interest to all major oil exploration companies as it has the potential to derive large financial benefit. In this paper, an infeasibility driven evolutionary algorithm is used to solve a 56 well reservoir problem which demonstrates its efficiency in solving constrained optimization problems. Furthermore, a multi-objective formulation of the problem is posed and solved using a number of algorithms, which eliminates the need for solving the (single objective) problem on a regular basis. Lastly, a modified single objective formulation of the problem is also proposed, which aims to maximize the profit instead of the quantity of oil. It is shown that even with a lesser amount of oil extracted, more economic benefits can be achieved through the modified formulation.
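For a feel of the single-objective problem, a greedy marginal-return allocation — provably optimal when each well's response curve is concave — can be sketched as follows. The well-response curves are hypothetical stand-ins, not field data, and the paper itself uses an infeasibility-driven evolutionary algorithm rather than this greedy scheme.

```python
import numpy as np

def allocate_gas(curves, total_gas, step=1.0):
    """Greedy allocation for the gas-lift problem: repeatedly give the next
    unit of gas to the well with the highest marginal oil gain, subject to
    the total daily gas availability. `curves[i]` maps injected gas -> oil."""
    n = len(curves)
    g = np.zeros(n)
    budget = total_gas
    while budget >= step:
        gains = [curves[i](g[i] + step) - curves[i](g[i]) for i in range(n)]
        i = int(np.argmax(gains))
        if gains[i] <= 0:          # no well benefits from more gas
            break
        g[i] += step
        budget -= step
    return g

# Hypothetical concave well-response curves (oil rate vs injected gas).
curves = [lambda x, a=a: a * np.sqrt(x) for a in (3.0, 2.0, 1.0)]
g = allocate_gas(curves, total_gas=14.0)
```

Real response curves are nonconcave and nonsmooth, which is why population-based methods like the evolutionary algorithm in the paper are attractive for the 56-well case.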
Constrained multibody system dynamics: An automated approach
NASA Technical Reports Server (NTRS)
Kamman, J. W.; Huston, R. L.
1982-01-01
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. The closed-loop problem of multibody chain systems is addressed. The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. The modification, based upon a solution of the constraint equations obtained through a zero-eigenvalues theorem, is a contraction of the dynamical equations. For a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as constraint vectors in n-dimensional space. In this setting the system itself is free to move in the n-m directions which are orthogonal to the constraint vectors.
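The zero-eigenvalue/null-space idea can be sketched directly: project the dynamical equations onto the n-m directions orthogonal to the constraint vectors. The single constraint and all numbers below are arbitrary illustrations, and the constraint right-hand side is taken as zero for simplicity.

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Orthonormal basis for the n-m directions orthogonal to the constraint
    vectors (rows of A), obtained via SVD (the zero-eigenvalues idea)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def constrained_accel(M, f, A):
    """Accelerations of M qdd = f subject to A qdd = 0: contract the
    dynamical equations onto N = null(A) and solve the reduced system."""
    N = nullspace(A)
    z = np.linalg.solve(N.T @ M @ N, N.T @ f)   # (n-m)-dimensional dynamics
    return N @ z

M = np.diag([2.0, 1.0, 3.0])        # generalized mass matrix (illustrative)
f = np.array([1.0, -1.0, 2.0])      # generalized forces
A = np.array([[1.0, 1.0, 1.0]])     # one constraint row (m = 1, n = 3)
qdd = constrained_accel(M, f, A)
```

The contracted system has dimension n-m, so the constraint forces never need to be computed explicitly.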
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important public-safety issue. Based on the shortest-path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, the travel time model, and the fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
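The credibility measure underlying such chance constraints can be sketched for triangular fuzzy arc lengths. This is a minimal illustration of the constraint check, not the paper's hybrid fuzzy-simulation/genetic algorithm; the arc values and budget are invented.

```python
def credibility_leq(x, a, b, c):
    """Credibility Cr{xi <= x} for a triangular fuzzy variable (a, b, c):
    the average of the possibility and necessity measures."""
    pos = 0.0 if x < a else (1.0 if x >= b else (x - a) / (b - a))
    nec = 0.0 if x <= b else (1.0 if x >= c else (x - b) / (c - b))
    return 0.5 * (pos + nec)

def chance_feasible(path_arcs, budget, alpha=0.9):
    """Chance constraint Cr{ sum of fuzzy arc lengths <= budget } >= alpha.
    Triangular fuzzy lengths add component-wise along the path."""
    a = sum(t[0] for t in path_arcs)
    b = sum(t[1] for t in path_arcs)
    c = sum(t[2] for t in path_arcs)
    return credibility_leq(budget, a, b, c) >= alpha

arcs = [(1.0, 2.0, 3.0), (2.0, 3.0, 5.0)]     # two arcs on a candidate path
ok80 = chance_feasible(arcs, budget=7.0, alpha=0.8)
ok90 = chance_feasible(arcs, budget=7.0, alpha=0.9)
```

A genetic algorithm would call such a feasibility check (or its fuzzy-simulation analogue for objectives without closed forms) when evaluating each candidate path.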
Effect of an eigenstrain on slow viscous flow of compressible fluid films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, P.E.
We present a general formulation of the mechanics of slow viscous flow of slightly compressible fluid films in the presence of an eigenstrain. An eigenstrain represents a constrained volume change due to temperature, concentration of a dissolved species, or a chemical transformation. A silicon dioxide film grown on a silicon surface is an example of a viscous fluid film that is affected by a constrained volume change. We obtain a general expression for pressure in a fluid film produced by a surface chemical reaction accompanied by a volume change. This result is used to study the effect of an eigenstrain on viscous stress relaxation in fluid films.
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
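The structure of the formulation — integer stimulation levels, a safety cap, and a bound on non-target activation — can be illustrated with a brute-force search, feasible only because this toy instance is tiny; a real instance needs a MILP solver. All gains and limits below are invented for illustration.

```python
import itertools
import numpy as np

# Toy mixed-integer instance: choose integer current levels in {0,1,2,3}
# for 3 electrodes to maximize target-region activation, subject to a
# total-current safety cap and a bound on non-target activation.
target_gain = np.array([0.5, 0.75, 0.25])     # target activation per unit current
offtarget_gain = np.array([0.5, 0.125, 0.75]) # non-target activation per unit
best, best_val = None, -np.inf
for levels in itertools.product(range(4), repeat=3):
    levels = np.array(levels)
    if levels.sum() > 6:                      # safety: total current cap
        continue
    if offtarget_gain @ levels > 1.5:         # non-target region must stay low
        continue
    val = target_gain @ levels
    if val > best_val:
        best, best_val = levels, val
# If no assignment passes both constraints, the goals are infeasible --
# the conclusive feasibility check the MILP framework provides.
```

A MILP solver returns the same certificate of optimality or infeasibility without enumerating the exponentially many level combinations.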
Heat transfer in damaged material
NASA Astrophysics Data System (ADS)
Kruis, J.
2013-10-01
Fully coupled thermo-mechanical analysis of civil engineering problems is studied. The mechanical analysis is based on damage mechanics, which is useful for modeling the behaviour of quasi-brittle materials, especially in tension. The damage is assumed to be isotropic. The heat transfer is assumed in the form of heat conduction governed by the Fourier law and heat radiation governed by the Stefan-Boltzmann law. The fully coupled thermo-mechanical problem is formulated.
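The two transfer mechanisms cited above take their standard textbook forms (generic notation, not necessarily the paper's):

```latex
% Fourier heat conduction and Stefan-Boltzmann radiative exchange:
\mathbf{q} \;=\; -\lambda \,\nabla T,
\qquad
q_r \;=\; \varepsilon\,\sigma\,\bigl(T_s^{4} - T_\infty^{4}\bigr),
```

where λ is the thermal conductivity, ε the surface emissivity, σ the Stefan-Boltzmann constant, and T_s, T_∞ the surface and ambient temperatures; in the damaged material, λ may additionally depend on the damage variable.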
Integration of Nanofluids into Commercial Antifreeze Concentrates with ASTM D15 Corrosion Testing
2013-05-01
are also proprietary. Blending and Milling: A Fisher Scientific Model 550 Sonic Dismembrator was used in making nanodispersions. A horizontal 2L... Commercial Antifreeze Zerex/Water: Three Zerex antifreeze concentrates were chosen: Zerex G-05: phosphate-free, long-life hybrid formulation, mostly used for passenger cars. Zerex 618: fully formulated with organic acid, mostly used for heavy-duty diesel engines. Zerex Dex-Cool: phosphate and silicate
National Institute of Standards and Technology Data Gateway
SRD 103a NIST ThermoData Engine Database (PC database for purchase) ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.
Submerged flow bridge scour under clear water conditions
DOT National Transportation Integrated Search
2012-09-01
Prediction of pressure flow (vertical contraction) scour underneath a partially or fully submerged bridge superstructure in an extreme flood event is crucial for bridge safety. An experimentally and numerically calibrated formulation is developed...
NASA Technical Reports Server (NTRS)
Collins, Oliver (Inventor); Dolinar, Jr., Samuel J. (Inventor); Hus, In-Shek (Inventor); Bozzola, Fabrizio P. (Inventor); Olson, Erlend M. (Inventor); Statman, Joseph I. (Inventor); Zimmerman, George A. (Inventor)
1991-01-01
A method of formulating and packaging decision-making elements into a long constraint length Viterbi decoder which involves formulating the decision-making processors as individual Viterbi butterfly processors that are interconnected in a deBruijn graph configuration. A fully distributed architecture, which achieves high decoding speeds, is made feasible by novel wiring and partitioning of the state diagram. This partitioning defines universal modules, which can be used to build any size decoder, such that a large number of wires is contained inside each module, and a small number of wires is needed to connect modules. The total system is modular and hierarchical, and it implements a large proportion of the required wiring internally within modules and may include some external wiring to fully complete the deBruijn graph.
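The add-compare-select butterfly on a de Bruijn trellis can be sketched as follows. This is the generic ACS step that each butterfly processor performs, not the patented hardware partitioning; the state count and branch metrics are illustrative.

```python
import numpy as np

def acs_butterfly(metrics, branch):
    """One add-compare-select (ACS) step over a de Bruijn trellis with S
    states: the predecessors of state s are s >> 1 and (s >> 1) + S/2,
    the shift structure that lets butterfly processors be wired as a
    de Bruijn graph. branch[p, s] is the branch metric for p -> s."""
    S = len(metrics)
    new = np.empty(S)
    decisions = np.empty(S, dtype=int)
    for s in range(S):
        p0, p1 = s >> 1, (s >> 1) + S // 2
        c0 = metrics[p0] + branch[p0, s]   # add
        c1 = metrics[p1] + branch[p1, s]
        decisions[s] = int(c1 < c0)        # compare: survivor for traceback
        new[s] = min(c0, c1)               # select
    return new, decisions

metrics = np.array([0.0, 10.0, 10.0, 10.0])   # path metrics at time t
branch = np.full((4, 4), 5.0)
branch[0, 0], branch[0, 1] = 1.0, 2.0         # cheap transitions out of state 0
new, dec = acs_butterfly(metrics, branch)
```

Because state s depends only on two fixed predecessors, the S updates are independent, which is what allows the decoder to be partitioned into modules with mostly internal wiring.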
Some aspects of multicomponent excess free energy models with subregular binaries
NASA Astrophysics Data System (ADS)
Cheng, Weiji; Ganguly, Jibamitra
1994-09-01
We have shown that two of the most commonly used multicomponent formulations of excess Gibbs free energy of mixing, those by WOHL (1946, 1953) and REDLICH and KISTER (1948), are formally equivalent if the binaries are constrained to have subregular properties, and also that other subregular multicomponent formulations developed in the mineralogical and geochemical literature are equivalent to, or higher order extensions of, these formulations. We have also presented a compact derivation of a multicomponent subregular solution leading to the same expression as derived by HELFFRICH and WOOD (1989). It is shown that Wohl's multicomponent formulation involves combination of binary excess free energies, which are calculated at compositions obtained by normal projection of the multicomponent composition onto the bounding binary joins, and is, thus, equivalent to the formulation developed by MUGGIANU et al. (1975). Finally, following the lead of HILLERT (1980), we have explored the limiting behavior of regular and subregular ternary solutions when a pair of components become energetically equivalent, and have, thus, derived an expression for calculating the ternary interaction parameter in a ternary solution from a knowledge of the properties of the bounding binaries, when one of these binaries is nearly ideal.
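For reference, the two-parameter subregular (Margules) form that these multicomponent formulations are constrained to reproduce on each binary join can be written in standard notation (not copied from the paper):

```latex
\Delta G^{xs}_{ij} = X_i X_j \left( W_{ji} X_i + W_{ij} X_j \right)
```

Setting $W_{ij} = W_{ji}$ recovers the regular (symmetric) solution as a special case.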
NASA Astrophysics Data System (ADS)
Huyakorn, P. S.; Panday, S.; Wu, Y. S.
1994-06-01
A three-dimensional, three-phase numerical model is presented for simulating the movement of non-aqueous-phase liquids (NAPLs) through porous and fractured media. The model is designed for practical application to a wide variety of contamination and remediation scenarios involving light or dense NAPLs in heterogeneous subsurface systems. The model formulation is first derived for three-phase flow of water, NAPL and air (or vapor) in porous media. The formulation is then extended to handle fractured systems using the dual-porosity and discrete-fracture modeling approaches. The model accommodates a wide variety of boundary conditions, including withdrawal and injection well conditions, which are treated rigorously using fully implicit schemes. The three-phase formulation collapses to its simpler forms when air-phase dynamics are neglected, capillary effects are neglected, or two-phase air-liquid or liquid-liquid systems with one or two active phases are considered. A Galerkin procedure with upstream weighting of fluid mobilities, storage matrix lumping, and fully implicit treatment of nonlinear coefficients and well conditions is used. A variety of nodal connectivity schemes leading to finite-difference, finite-element and hybrid spatial approximations in three dimensions are incorporated in the formulation. Selection of primary variables and evaluation of the terms of the Jacobian matrix for the Newton-Raphson linearized equations are discussed. The various nodal lattice options, and their significance for computational time and memory requirements with regard to the block-Orthomin solution scheme, are noted. Aggressive time-stepping schemes and under-relaxation formulas implemented in the code further alleviate the computational burden.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
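The Bayesian sampling step such a framework relies on can be illustrated with a toy random-walk Metropolis chain constraining one hypothetical process-rate parameter against a synthetic Gaussian posterior. This is not the BOSS code; all names and values are invented:

```python
import math
import random

def metropolis(log_post, x0, n_steps=2000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler (illustrative only)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)       # propose a move
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy "process rate" parameter whose pseudo-observations imply a
# Gaussian posterior centered at 2.0 with sd 0.3.
log_post = lambda a: -0.5 * ((a - 2.0) / 0.3) ** 2
chain = metropolis(log_post, x0=0.0)
mean = sum(chain[500:]) / len(chain[500:])  # discard burn-in
```

In a real application the log-posterior would combine observation likelihoods with priors over both process-rate parameters and structural choices.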
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
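The nuclear-norm subproblem inside inexact augmented Lagrange multiplier solvers reduces to singular value thresholding; a minimal sketch of that proximal step on invented data (not the authors' full algorithm):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, the core subproblem in inexact-ALM low-rank solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # shrink singular values toward zero
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 signal
X = svt(L + 0.01 * rng.standard_normal((20, 20)), tau=1.0)       # denoised, low rank
```

Thresholding at tau = 1.0 zeroes the noise-level singular values while keeping the dominant ones, so X is again (at most) rank 3.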
Hamiltonian dynamics of thermostated systems: two-temperature heat-conducting phi4 chains.
Hoover, Wm G; Hoover, Carol G
2007-04-28
We consider and compare four Hamiltonian formulations of thermostated mechanics, three of them kinetic, and the other one configurational. Though all four approaches "work" at equilibrium, their application to many-body nonequilibrium simulations can fail to provide a proper flow of heat. All the Hamiltonian formulations considered here are applied to the same prototypical two-temperature "phi4" model of a heat-conducting chain. This model incorporates nearest-neighbor Hooke's-Law interactions plus a quartic tethering potential. Physically correct results, obtained with the isokinetic Gaussian and Nosé-Hoover thermostats, are compared with two other Hamiltonian results. The latter results, based on constrained Hamiltonian thermostats, fail to model correctly the flow of heat.
Vibration of a spatial elastica constrained inside a straight tube
NASA Astrophysics Data System (ADS)
Chen, Jen-San; Fang, Joyce
2014-04-01
In this paper we study the dynamic behavior of a clamped-clamped spatial elastica under edge thrust constrained inside a straight cylindrical tube. Attention is focused on the calculation of the natural frequencies and mode shapes of the planar and spatial one-point-contact deformations. The main issue in determining the natural frequencies of a constrained rod is the movement of the contact point during vibration. In order to capture the physical essence of the contact-point movement, an Eulerian description of the equations of motion based on director theory is formulated. After proper linearization of the equations of motion, boundary conditions, and contact conditions, the natural frequencies and mode shapes of the elastica can be obtained by solving a system of eighteen first-order differential equations with the shooting method. It is concluded that the planar one-point-contact deformation becomes unstable and evolves to a spatial deformation at a bifurcation point in both displacement and force control procedures.
Role of slack variables in quasi-Newton methods for constrained optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, R.A.
In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
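A small worked instance of the squared-slack conversion discussed here: minimize x^2 subject to x >= 1, rewriting the inequality as 1 - x + s^2 = 0 and applying Newton's method to the KKT conditions. This is a toy example, not from the report:

```python
import numpy as np

def kkt_newton(z, iters=20):
    """Newton's method on the KKT system of
        minimize x**2  subject to  x >= 1,
    with the inequality rewritten as the equality 1 - x + s**2 = 0
    via a squared slack variable s. Unknowns z = (x, s, lam)."""
    for _ in range(iters):
        x, s, lam = z
        # stationarity in x, stationarity in s, and the slack equality
        F = np.array([2*x - lam, 2*lam*s, 1 - x + s*s])
        J = np.array([[2.0, 0.0,   -1.0],
                      [0.0, 2*lam,  2*s],
                      [-1.0, 2*s,   0.0]])
        z = z - np.linalg.solve(J, F)
    return z

x, s, lam = kkt_newton(np.array([1.5, 0.5, 1.5]))
```

From this starting point the iteration lands on the constrained minimizer x = 1 with slack s = 0 (constraint active) and multiplier lam = 2, illustrating that the slack formulation need not misbehave.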
HIPPO Unit Commitment Version 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-01-17
Developed for the Midcontinent Independent System Operator, Inc. (MISO), HIPPO Unit Commitment Version 1 solves the security-constrained unit commitment problem. The model was developed to solve MISO's cases. This version of the code includes an I/O module to read MISO's csv files, modules to create a state-based mixed integer programming (MIP) formulation, and modules to test basic procedures for solving the MIP via HPC.
Polyfibroblast: A Self-Healing and Galvanic Protection Additive
2011-07-25
3 Key Accomplishments 3.1 Silane Formulation Processability Silane coupling agents may be added to the existing microcapsules either in low...constrained by the need to form stable microcapsules. To this end, we explored a number of recipes in which the following silane coupling agents were...Isocyanatopropyltrimethoxy silane (ITS). • Glycidoxypropyltrimethoxy silane (GPS) As expected, the lowest concentrations most readily formed stable microcapsules. The
Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks
NASA Technical Reports Server (NTRS)
Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris
2015-01-01
Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.
Present-day kinematics of the Danakil block (southern Red Sea-Afar) constrained by GPS
NASA Astrophysics Data System (ADS)
Ladron de Guevara, R.; Jonsson, S.; Ruch, J.; Doubre, C.; Reilinger, R. E.; Ogubazghi, G.; Floyd, M.; Vasyura-Bathke, H.
2017-12-01
The rifting of the Arabian plate from the Nubian and Somalian plates is primarily accommodated by seismic and magmatic activity along two rift arms of the Afar triple junction (the Red Sea and Gulf of Aden rifts). The spatial distribution of active deformation in the Afar region has been constrained with geodetic observations. However, the plate boundary configuration in which this deformation occurs is still not fully understood. South of 17°N, the Red Sea rift is composed of two parallel and overlapping rift branches separated by the Danakil block. The distribution of the extension across these two overlapping rifts, their potential connection through a transform fault zone and the counterclockwise rotation of the Danakil block have not yet been fully resolved. Here we analyze new GPS observations from the Danakil block, the Gulf of Zula area (Eritrea) and Afar (Ethiopia) together with previous geodetic survey data to better constrain the plate kinematics and active deformation of the region. The new data were collected in 2016 and add up to 5 years to the existing geodetic observations (going back to 2000). Our improved GPS velocity field shows differences with previously modeled GPS velocities, suggesting that the rate and rotation of the Danakil block need to be updated. The new velocity field also shows that the plate-boundary strain is accommodated by broad deformation zones rather than across sharp boundaries between tectonic blocks. To better determine the spatial distribution of the strain, we first implement a rigid block model to constrain the overall regional plate kinematics and to isolate the plate-boundary deformation at the western boundary of the Danakil block. We then study whether the recent southern Red Sea rifting events have caused detectable changes in observed GPS velocities and if the observations can be used to constrain the scale of this offshore rift activity.
Finally, we investigate different geometries of transform faults that might connect the two overlapping branches of the southern Red Sea rift in the Gulf of Zula region.
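The forward model underlying a rigid block fit is the surface velocity of a plate rotating about an Euler pole, v = ω × r. A sketch follows; the parameter values are illustrative, not the paper's estimates:

```python
import numpy as np

def block_velocity(lat, lon, pole_lat, pole_lon, omega_deg_myr, R=6371e3):
    """East and north surface velocity (mm/yr) of a rigid block rotating
    about an Euler pole: v = omega x r. Angles in degrees, rate in
    deg/Myr. This is the forward model inverted in rigid-block studies."""
    def unit(la, lo):
        la, lo = np.radians([la, lo])
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    w = np.radians(omega_deg_myr) / 1e6 * unit(pole_lat, pole_lon)  # rad/yr
    r = R * unit(lat, lon)
    v = np.cross(w, r) * 1e3                      # ECEF velocity, mm/yr
    la, lo = np.radians([lat, lon])
    east = np.array([-np.sin(lo), np.cos(lo), 0.0])
    north = np.array([-np.sin(la) * np.cos(lo),
                      -np.sin(la) * np.sin(lo),
                      np.cos(la)])
    return v @ east, v @ north

# A site on the equator, 90 degrees from a pole at the north pole,
# rotating at 1 deg/Myr, moves eastward at about 111 mm/yr.
e, n = block_velocity(0.0, 0.0, pole_lat=90.0, pole_lon=0.0, omega_deg_myr=1.0)
```

Fitting pole position and rate to observed GPS velocities, and examining the residuals near block boundaries, is how broad deformation zones are separated from rigid rotation.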
A fully implicit Hall MHD algorithm based on the ion Ohm's law
NASA Astrophysics Data System (ADS)
Chacón, Luis
2010-11-01
Hall MHD is characterized by extreme hyperbolic numerical stiffness stemming from fast dispersive waves. Implicit algorithms are potentially advantageous, but very difficult to implement efficiently due to the condition numbers of the associated matrices. Here, we explore the extension of a successful fully implicit, fully nonlinear algorithm for resistive MHD [L. Chacón, Phys. Plasmas 15 (2008)], based on Jacobian-free Newton-Krylov methods with physics-based preconditioning, to Hall MHD. Traditionally, Hall MHD has been formulated using the electron equation of motion (EOM) to determine the electric field in the plasma (the so-called Ohm's law). However, given that the center-of-mass EOM, the ion EOM, and the electron EOM are linearly dependent, one could equivalently employ the ion EOM as the Ohm's law for a Hall MHD formulation. While, from a physical standpoint, there is no a priori advantage to using one Ohm's law vs. the other, we argue in this poster that there is an algorithmic one. We will show that, while the electron Ohm's law prevents the extension of the resistive MHD preconditioning strategy to Hall MHD, an ion Ohm's law allows it trivially. Verification and performance numerical results on relevant problems will be presented.
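The "Jacobian-free" ingredient of a JFNK solver is the matrix-free directional derivative, which approximates the Jacobian-vector product with one extra residual evaluation. A minimal sketch with an invented toy residual:

```python
import numpy as np

def jv(F, u, v, eps=1e-7):
    """Jacobian-free Newton-Krylov building block: approximate J(u) @ v
    by finite differencing the residual, so the Jacobian matrix is
    never formed or stored."""
    return (F(u + eps * v) - F(u)) / eps

# Toy residual with known Jacobian [[2*u0, -1], [1, 3*u1**2]].
F = lambda u: np.array([u[0]**2 - u[1], u[0] + u[1]**3])
u = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
approx = jv(F, u, v)
exact = np.array([[2.0, -1.0], [1.0, 12.0]]) @ v
```

A Krylov method (e.g. GMRES) needs only such products, which is what makes the physics-based preconditioning, rather than the Jacobian itself, the critical design choice.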
ERIC Educational Resources Information Center
Tay, Louis; Vermunt, Jeroen K.; Wang, Chun
2013-01-01
We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple burn orbital maneuvers. Moreover, the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard constraint formulation is a numerically well-behaved subcase. As a result, convergence properties have been greatly improved.
Structural optimization for joined-wing synthesis
NASA Technical Reports Server (NTRS)
Gallman, John W.; Kroo, Ilan M.
1992-01-01
The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
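The fully stressed resizing rule referred to above can be sketched in a few lines. This is a toy axial-member example with invented numbers; real joined-wing designs also carry the buckling constraints the paper emphasizes:

```python
def fully_stressed(areas, forces, sigma_allow, iters=10):
    """Fully stressed design resizing: scale each member's area by the
    ratio of its stress magnitude to the allowable stress,
        A <- A * |sigma| / sigma_allow.
    For a statically determinate structure (member forces independent
    of sizing) this converges in one step; the loop shows the general
    iterative pattern used when forces depend on the sizing."""
    for _ in range(iters):
        stresses = [f / a for f, a in zip(forces, areas)]
        areas = [a * abs(s) / sigma_allow for a, s in zip(areas, stresses)]
    return areas

# Two axial members, fixed internal forces, allowable stress 250 units.
areas = fully_stressed([1.0, 2.0], forces=[1000.0, -500.0], sigma_allow=250.0)
```

Each converged area equals |F| / sigma_allow, i.e. every member sits exactly at the allowable stress, which is the fully stressed criterion the paper compares against minimum weight.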
A Coalitional Game for Distributed Inference in Sensor Networks With Dependent Observations
NASA Astrophysics Data System (ADS)
He, Hao; Varshney, Pramod K.
2016-04-01
We consider the problem of collaborative inference in a sensor network with heterogeneous and statistically dependent sensor observations. Each sensor aims to maximize its inference performance by forming a coalition with other sensors and sharing information within the coalition. It is proved that the inference performance is a nondecreasing function of the coalition size. However, in an energy constrained network, the energy consumption of inter-sensor communication also increases with increasing coalition size, which discourages the formation of the grand coalition (the set of all sensors). In this paper, the formation of non-overlapping coalitions with statistically dependent sensors is investigated under a specific communication constraint. We apply a game theoretical approach to fully explore and utilize the information contained in the spatial dependence among sensors to maximize individual sensor performance. Before formulating the distributed inference problem as a coalition formation game, we first quantify the gain and loss in forming a coalition by introducing the concepts of diversity gain and redundancy loss for both estimation and detection problems. These definitions, enabled by the statistical theory of copulas, allow us to characterize the influence of statistical dependence among sensor observations on inference performance. An iterative algorithm based on merge-and-split operations is proposed for the solution and the stability of the proposed algorithm is analyzed. Numerical results are provided to demonstrate the superiority of our proposed game theoretical approach.
Silymarin nanoparticle prevents paracetamol-induced hepatotoxicity
Das, Suvadra; Roy, Partha; Auddy, Runa Ghosh; Mukherjee, Arup
2011-01-01
Silymarin (Sm) is a polyphenolic component extracted from Silybum marianum. It is an antioxidant, traditionally used as an immunostimulant, hepatoprotectant, and dietary supplement. Relatively recently, Sm has proved to be a valuable chemopreventive and a useful antineoplastic agent. Medical success for Sm is, however, constrained by very low aqueous solubility and associated biopharmaceutical limitations. Sm flavonolignans are also susceptible to ion-catalyzed degradation in the gut. Proven antihepatotoxic activity of Sm cannot therefore be fully exploited in acute chemical poisoning conditions like that in paracetamol overdose. Moreover, a synchronous delivery that is required for hepatic regeneration is difficult to achieve by itself. This work is meant to circumvent the inherent limitations of Sm through the use of nanotechnology. Sm nanoparticles (Smnps) were prepared by nanoprecipitation in polyvinyl alcohol stabilized Eudragit RS100® polymer (Rohm Pharma GmbH, Darmstadt, Germany). Process parameter optimization provided 67.39% entrapment efficiency and a Gaussian particle distribution of average size 120.37 nm. Sm release from the nanoparticles was considerably sustained for all formulations. Smnps were strongly protective against hepatic damage when tested in a paracetamol overdose hepatotoxicity model. Nanoparticles recorded no animal death even when administered after an established paracetamol-induced hepatic necrosis. Prevention of progressive paracetamol hepatic damage by Smnps was traced to efficient glutathione regeneration, to a level of 11.3 μmol/g in hepatic tissue. PMID:21753880
Guidance strategies and analysis for low thrust navigation
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1973-01-01
A low-thrust guidance algorithm suitable for operational use was formulated. A constrained linear feedback control law was obtained using a minimum terminal miss criterion and restricting control corrections to constant changes for specified time periods. Both fixed- and variable-time-of-arrival guidance were considered. The performance of the guidance law was evaluated by applying it to the approach phase of the 1980 rendezvous mission with the comet Encke.
NASA Astrophysics Data System (ADS)
Amengonu, Yawo H.; Kakad, Yogendra P.
2014-07-01
Quasivelocity techniques were applied to derive the dynamics of a Differential Wheeled Mobile Robot (DWMR) in the companion paper. The present paper formulates a control system design for trajectory tracking of this class of robots. The method develops a feedback linearization technique for the nonlinear system using a dynamic extension algorithm. The effectiveness of the nonlinear controller is illustrated with a simulation example.
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, it is found that CRLS gave much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
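A minimal stand-in for the nonnegativity-constrained solve used as the classical baseline above: projected-gradient nonnegative least squares on invented data. This is not the CRLS estimator itself, which additionally models uncertainty in the response matrix:

```python
import numpy as np

def nn_least_squares(A, b, iters=5000):
    """Projected-gradient nonnegative least squares:
    gradient step on ||A x - b||^2, then projection onto x >= 0
    (release rates cannot be negative)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1 / Lipschitz constant
    for _ in range(iters):
        x = np.maximum(0.0, x - step * A.T @ (A @ x - b))
    return x

# Toy response matrix and observations generated from a known
# nonnegative source history [2.0, 0.5].
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x = nn_least_squares(A, A @ np.array([2.0, 0.5]))
```

On well-conditioned, noise-free data the constrained solve recovers the true history; the paper's point is that under ill-conditioning and model uncertainty the robust variant degrades far more gracefully.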
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
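As a toy analogue of training one node of a logic-based pathway model, the following fits a normalized Hill transfer function, the kind of input-output curve a constrained fuzzy logic node uses, to synthetic data by grid-search least squares. Names and data are invented; the paper's NLP formulation is far more general:

```python
def hill(x, k, n):
    """Normalized Hill transfer function: 0 at no input, saturating to 1."""
    return x**n / (k**n + x**n)

def fit_node(xs, ys, ks, ns):
    """Grid-search least-squares fit of (k, n) for one pathway node."""
    best = min(((sum((hill(x, k, n) - y)**2 for x, y in zip(xs, ys)), k, n)
                for k in ks for n in ns))
    return best[1], best[2]

# Synthetic "phosphoprotein readout" generated from k=1.0, n=2.0.
xs = [0.1, 0.5, 1.0, 2.0, 4.0]
ys = [hill(x, k=1.0, n=2.0) for x in xs]
k, n = fit_node(xs, ys, ks=[0.5, 1.0, 2.0], ns=[1.0, 2.0, 4.0])
```

Replacing the grid search with a gradient-based nonlinear programming solver over all nodes simultaneously is, in spirit, the reformulation the paper proposes to cut CPU time.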
NASA Astrophysics Data System (ADS)
Ballantyne, F.; Billings, S. A.
2016-12-01
Much of the variability in projections of Earth's future C balance derives from uncertainty in how to formulate and parameterize models of biologically mediated transformations of soil organic C (SOC). Over the past decade, models of belowground decomposition have incorporated more realism, namely microbial biomass and exoenzyme pools, but it remains unclear whether microbially mediated decomposition is accurately formulated. Different models and different assumptions about how microbial efficiency, defined in terms of respiratory losses, varies with temperature exert great influence on SOC and CO2 flux projections for the future. Here, we incorporate a physiologically realistic formulation of CO2 loss from microbes, distinct from extant formulations and logically consistent with microbial C uptake and losses, into belowground dynamics and contrast its projections for SOC pools and CO2 flux from soils with those from the phenomenological formulations of efficiency in current models. We quantitatively describe how short- and long-term SOC dynamics are influenced by different mathematical formulations of efficiency, and show that our lack of knowledge regarding loss rates from SOC and microbial biomass pools, specific respiration rate, and maximum substrate uptake rate severely constrains our ability to confidently parameterize microbial SOC modules in Earth System Models. Both steady-state SOC and microbial biomass C pools, as well as transient responses to perturbations, can differ substantially depending on how microbial efficiency is derived. In particular, the discrepancy between SOC stocks for different formulations of efficiency varies from negligible to more than two orders of magnitude, depending on the relative values of respiratory versus non-respiratory losses from microbial biomass. Mass-specific respiration and proportional loss rates from soil microbes emerge as key determinants of the consequences of different formulations of efficiency for C flux in soils.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single processor and for multiple processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
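The best-merge flavor of HSWO region growing can be sketched on a 1-D toy signal. This is an invented miniature; real HSEG/RHSEG operate on images and add the constrained spectral clustering step described above:

```python
def hswo_step_region_grow(values, n_regions):
    """Toy 1-D hierarchical stepwise region growing: repeatedly merge the
    pair of adjacent regions whose means are closest (the merge that
    least degrades the segmentation), until n_regions remain. Real HSEG
    also allows threshold-constrained merges of non-adjacent regions."""
    regions = [[v] for v in values]
    while len(regions) > n_regions:
        means = [sum(r) / len(r) for r in regions]
        i = min(range(len(regions) - 1),
                key=lambda j: abs(means[j] - means[j + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

segs = hswo_step_region_grow([1.0, 1.1, 5.0, 5.2, 9.0], n_regions=3)
```

Recording the segmentation each time the merge criterion jumps gives the hierarchical set of segmentations that HSEG reports at its convergence points.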
A Social Semiotic Mapping of Voice in Youth Media: The Pitch in Youth Video Production
ERIC Educational Resources Information Center
Pyles, Damiana Gibbons
2017-01-01
An ethics of youth media production is the interplay of identities, media literacy, and modality that shape the environment within which young people produce media, yet how "voice" is fostered and/or constrained in these environments could still be explored more fully. This paper stems from a larger qualitative study of how youth created…
Sardella, Roccaldo; Ianni, Federica; Lisanti, Antonella; Scorzoni, Stefania; Marini, Francesca; Sternativo, Silvia; Natalini, Benedetto
2014-05-01
To the best of our knowledge, enantioselective chromatographic protocols for β-amino acids with polysaccharide-based chiral stationary phases (CSPs) have not yet appeared in the literature. Therefore, the primary objective of this work was the development of chromatographic methods based on the use of an amylose-derivative CSP (Lux Amylose-2), enabling the direct normal-phase (NP) enantioresolution of four fully constrained β-amino acids. Also, the results obtained with the glycopeptide-type Chirobiotic T column employed in the usual polar-ionic (PI) mode of elution are compared with those achieved with the polysaccharide-based phase. The Lux Amylose-2 column, in combination with NP eluent systems containing alkyl sulfonic acids, prevailed over the Chirobiotic T column used under the PI mode of elution, and hence can be considered the elective choice for the enantioseparation of this class of rigid β-amino acids. Moreover, the extraordinarily high α (up to 4.60) and RS (up to 10.60) values provided by the polysaccharidic polymer, especially when used with eluent systems containing camphor sulfonic acid, make it also suitable for preparative-scale enantioisolations.
Determining Size Distribution at the Phoenix Landing Site
NASA Astrophysics Data System (ADS)
Mason, E. L.; Lemmon, M. T.
2016-12-01
Dust aerosols play a crucial role in determining atmospheric radiative heating on Mars through absorption and scattering of sunlight. How dust scatters and absorbs light depends on size, shape, composition, and quantity. Optical properties of the dust have been well constrained at visible and near-infrared wavelengths using various methods [Wolff et al. 2009, Lemmon et al. 2004]. In addition, the dust is nonspherical, and irregular shapes have been shown to work well in determining effective particle size [Pollack et al. 1977]. Variance of the size distribution is less constrained but constitutes an important parameter in fully describing the dust. The Phoenix Lander's Surface Stereo Imager performed several cross-sky brightness surveys to determine the size distribution and scattering properties of dust in the wavelength range of 400 to 1000 nm. In combination with a single-layer radiative transfer model, these surveys can be used to help constrain variance of the size distribution. We will present a discussion of seasonal size distribution as it pertains to the Phoenix landing site.
Tests of gravity with future space-based experiments
NASA Astrophysics Data System (ADS)
Sakstein, Jeremy
2018-03-01
Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
NASA Astrophysics Data System (ADS)
Amelang, Jeff
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. 
Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
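The central idea of a QC summation rule, approximating the total Hamiltonian by a weighted sum of site energies over a small sample of atoms, can be illustrated on a 1D harmonic chain. This is a toy sketch under stated assumptions (harmonic nearest-neighbor bonds, a naive equal-weight rule); it is not the formulation or the new summation rules of this work, but it does reproduce the surface-related error those rules are designed to eliminate.

```python
import numpy as np

def site_energy(x, i):
    """Energy attributed to atom i: half of each bond's harmonic energy
    0.5 * k * strain^2 with k = 1 and rest length 1, so interior atoms
    carry 0.25 * strain^2 per bond."""
    e = 0.0
    if i > 0:
        e += 0.25 * (x[i] - x[i - 1] - 1.0) ** 2
    if i < len(x) - 1:
        e += 0.25 * (x[i + 1] - x[i] - 1.0) ** 2
    return e

N = 101
F = 1.02                           # uniform (affine) stretch
x = F * np.arange(N, dtype=float)

exact = sum(site_energy(x, i) for i in range(N))

# naive summation rule: sample three interior atoms, with weights that
# sum to the total number of atoms
samples = [1, 50, 99]
weights = [N / len(samples)] * len(samples)
approx = sum(w * site_energy(x, i) for w, i in zip(weights, samples))

print(exact, approx)
```

Under this affine deformation every interior atom has identical energy, so the sampled sum is close to exact, but the two free-surface atoms carry only half the interior energy, so the naive rule slightly overestimates the total; this is precisely the kind of surface artifact that motivates more careful sampling-weight constructions.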
Hee, S.; Vázquez, J. A.; Handley, W. J.; ...
2016-12-01
Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-α data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation-of-state posterior in the range 1.5 < z < 3. Although the concordance ΛCDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as 'phantom dark energy') is identified within the 1.5σ confidence intervals of the posterior distribution. In order to identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback-Leibler divergence. Moreover, this formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide much more constraining power than the Lyman-α datasets. Furthermore, SNIa and BAO constrain most strongly over the redshift range 0.1-0.5, whilst the Lyman-α data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular dataset, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
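The information gain from prior to posterior quantified above is the Kullback-Leibler divergence. A minimal sketch for discretized distributions (the paper's formulation is more elaborate; the grid, bin values, and function name here are illustrative assumptions):

```python
import numpy as np

def kl_divergence(posterior, prior):
    """D_KL(posterior || prior) in nats for two discretized probability
    distributions defined on the same grid of bins."""
    p = np.asarray(posterior, float)
    q = np.asarray(prior, float)
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0                       # 0 * log(0) contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# broad (uninformative) prior on w(z) bins vs. a data-tightened posterior
prior = np.ones(10)
posterior = np.array([0, 0, 1, 4, 8, 4, 1, 0, 0, 0], float)
gain = kl_divergence(posterior, prior)
print(gain)   # positive: the data added information
```

A dataset combination that leaves the posterior close to the prior yields a divergence near zero, which is the sense in which the Lyman-α data constrain only weakly.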
Sadeghi, Mozhgan; Hemmati, Salar; Hamishehkar, Hamed
2016-01-01
Disintegrants are the key excipients used in tablet formulations to promote the breakup of the tablet into smaller pieces in the gastrointestinal environment, thereby increasing the available surface area and enabling a more rapid release of the active ingredient. Polysuccinimide (PSI), a biodegradable polymer synthesized from aspartic acid, was reacted with starch and fully characterized by CHN, 1H-NMR, and FTIR. PSI-grafted starch (PSI-St) was synthesized and applied as a disintegrant in the formulation of a rapidly disintegrating tablet of ondansetron, a nausea and vomiting medicine. The tablet formulated with the newly developed superdisintegrant was evaluated for hardness, friability, disintegration time, and dissolution rate, and the results were compared with tablets of identical composition differing only in the type of disintegrant. Tablets prepared with starch and tablets prepared with sodium starch glycolate (SSG) were used as negative and positive controls, respectively. Dissolution results indicated that although the onset of disintegration was faster for SSG than for PSI-St, higher amounts of drug were released within 10 min from tablets formulated with PSI-St than from those formulated with SSG. It was concluded that the newly synthesized superdisintegrant shows good potential for use in the formulation of fast-dissolving tablets.
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
An approach to optimal planning and scheduling for a communication network was created in which the nodes within the network communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, and the resulting constrained optimization problem was solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed-integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately an order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem, whose solution is possible.
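One standard way to convert integrality constraints into a continuous penalty, offered here purely as an illustration since the abstract does not specify the paper's "special penalty function", is a term that vanishes exactly at integer points:

```python
import numpy as np

def integrality_penalty(x, mu=10.0):
    """Continuous, differentiable penalty mu * sum_i sin^2(pi * x_i),
    which is zero exactly when every x_i is an integer. Adding it to the
    cost lets a continuous or heuristic optimizer drive the relaxed
    integer variables toward integer values."""
    return mu * float(np.sum(np.sin(np.pi * np.asarray(x)) ** 2))

def penalized_cost(x, cost):
    return cost(x) + integrality_penalty(x)

# toy link-assignment cost: prefers x near 0.7, but x must end up integer
cost = lambda x: float(np.sum((np.asarray(x) - 0.7) ** 2))
print(penalized_cost([1.0], cost))   # integer point: penalty is ~0
print(penalized_cost([0.7], cost))   # fractional point: heavily penalized
```

With such a penalty in place, the objective is continuous (though non-convex), so derivative-free heuristic searches of the kind the abstract mentions can be applied directly.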
Synthesis of improved moisture resistant polymers
NASA Technical Reports Server (NTRS)
Orell, M. K.
1979-01-01
The use of difluoromaleimide-capped prepolymers to provide improved moisture-resistant polymers was investigated. Six different prepolymer formulations were prepared by two different methods. One method utilized the PMR approach to polyimides, and the second employed the normal condensation route to provide fully imidized prepolymers. Polymer specimens cured at 450 F exhibited adequate long-term stability in air at 400 F. Moisture absorption studies were conducted on one polymer formulation. Neat polymer specimens exhibited weight gains of up to 2% (w/w) after exposure to 100% relative humidity at 344 K (160 F) for 400 hours.
Light-triggerable formulations for the intracellular controlled release of biomolecules.
Lino, Miguel M; Ferreira, Lino
2018-05-01
New therapies based on the use of biomolecules [e.g., proteins, peptides, and non-coding (nc)RNAs] have emerged during the past few years. Given their instability, adverse effects, and limited ability to cross cell membranes, delivery systems are required to fully reveal their biological potential. Sophisticated nanoformulations responsive to light offer an excellent opportunity for the controlled release of these biomolecules, enabling the control of timing, duration, location, and dosage. In this review, we discuss the design principles for the delivery of biomolecules, in particular proteins and RNA-based therapeutics, by light-triggerable formulations. We further discuss the opportunities offered by these formulations in terms of endosomal escape, as well as their limitations.
Low Reynolds number two-equation modeling of turbulent flows
NASA Technical Reports Server (NTRS)
Michelassi, V.; Shih, T.-H.
1991-01-01
A k-epsilon model that accounts for viscous and wall effects is presented. The proposed formulation does not contain the local wall distance, thereby greatly simplifying its application to complex geometries. The formulation is based on an existing k-epsilon model that was shown to agree well with direct numerical simulation results. The new form is compared with nine different two-equation models and with direct numerical simulation for a fully developed channel flow at Re = 3300. The simple flow configuration allows a comparison free from numerical inaccuracies. The computed results show that few of the considered forms exhibit satisfactory agreement with the channel flow data. The new model shows an improvement with respect to the existing formulations.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
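The implicit-continuity idea above, reparameterizing so that the side conditions at the knots are built into the basis, is commonly realized with a truncated power basis. A minimal fixed-order sketch (the paper's reparameterization additionally handles varying polynomial order across segments, which this toy does not):

```python
import numpy as np

def spline_design(t, knots, order=2):
    """Truncated power basis for a piecewise polynomial of a given order:
    columns [1, t, ..., t^order, (t-k1)_+^order, (t-k2)_+^order, ...].
    Raising each truncated term to the full order makes the fitted curve
    continuous, with continuous derivatives up to order-1, at every knot,
    so the continuity side conditions are implicit in the parameterization."""
    t = np.asarray(t, float)
    cols = [t ** p for p in range(order + 1)]
    cols += [np.clip(t - k, 0.0, None) ** order for k in knots]
    return np.column_stack(cols)

# quadratic spline with one knot at t = 1
X = spline_design([0.0, 0.5, 1.0, 1.5], knots=[1.0], order=2)
print(X.shape)   # (4, 4): intercept, t, t^2, (t-1)_+^2
```

Any coefficient vector applied to this design matrix automatically satisfies the join conditions, which is what makes the approach easy to program in standard mixed-model software such as SAS or S-plus.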
OPTIMASS: a package for the minimization of kinematic mass functions with constraints
NASA Astrophysics Data System (ADS)
Cho, Won Sang; Gainer, James S.; Kim, Doojin; Lim, Sung Hak; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun
2016-01-01
Reconstructed mass variables, such as M2, M2C, MT*, and MT2W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. We describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M2 variables.
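The Augmented Lagrangian Method mentioned above handles an equality constraint g(x) = 0 by repeatedly minimizing an augmented objective and updating a multiplier. Optimass delegates the inner minimization to Minuit; the sketch below is a numpy-only toy with a plain gradient-descent inner loop and a hypothetical quadratic cost, not the Optimass implementation.

```python
import numpy as np

def num_grad(F, x, h=1e-6):
    """Central finite-difference gradient of a scalar function F."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (F(x + e) - F(x - e)) / (2.0 * h)
    return g

def augmented_lagrangian(f, g, x0, outer=10, inner=400, lr=0.01):
    """Minimize f(x) subject to g(x) = 0: repeatedly minimize
    L(x) = f(x) + lam*g(x) + (mu/2)*g(x)^2 (here by gradient descent),
    then update the multiplier lam <- lam + mu*g(x) and grow mu."""
    x, lam, mu = np.asarray(x0, float), 0.0, 1.0
    for _ in range(outer):
        L = lambda v, lam=lam, mu=mu: f(v) + lam * g(v) + 0.5 * mu * g(v) ** 2
        for _ in range(inner):
            x = x - lr * num_grad(L, x)
        lam += mu * g(x)
        mu *= 1.5
    return x

# toy analogue of a mass-variable minimization: quadratic cost with one
# linear (e.g. momentum-conservation-like) equality constraint
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] - 1.0) ** 2
g = lambda v: v[0] + v[1] - 2.0
x = augmented_lagrangian(f, g, [0.0, 0.0])
print(x)   # converges toward the constrained optimum (2, 0)
```

The multiplier update removes the need to push the penalty weight mu to infinity, which keeps the inner minimizations well conditioned, one reason the method suits the large constrained parameter spaces of event reconstruction.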
Stall Recovery Guidance Algorithms Based on Constrained Control Approaches
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana
2016-01-01
Aircraft loss of control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms that are capable of assisting pilots to identify the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which run in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop medium-fidelity simulation experiment.
1986-12-01
paper, we consider geometrically exact models, such as the Kirchhoff-Love-Reissner-Antman model for rods and its counterpart for plates and shells. These...equivalent model, formulated as a constrained director theory - the so-called special theory of Cosserat rods - is due to Antman [1974] - see also...Antman and Jordan [1975], Antman and Kenny [1981], and Antman [1984] for some applications. The dynamic version along with the parametrization discussed
Time Evolution of the Giant Molecular Cloud Mass Functions across Galactic Disks
NASA Astrophysics Data System (ADS)
Kobayashi, Masato I. N.; Inutsuka, Shu-Ichiro; Kobayashi, Hiroshi; Hasegawa, Kenji
2017-01-01
We formulate and conduct the time-integration of the time evolution equation for the giant molecular cloud mass function (GMCMF), including the cloud-cloud collision (CCC) effect. Our results show that the CCC effect is limited to the massive end of the GMCMF, and indicate that future high-resolution, high-sensitivity radio observations may constrain giant molecular cloud (GMC) timescales by observing the GMCMF slope in the lower-mass regime.
Optimization of beam orientation in radiotherapy using planar geometry
NASA Astrophysics Data System (ADS)
Haas, O. C. L.; Burnham, K. J.; Mills, J. A.
1998-08-01
This paper proposes a new geometrical formulation of the coplanar beam orientation problem combined with a hybrid multiobjective genetic algorithm. The approach is demonstrated by optimizing the beam orientation in two dimensions, with the objectives being formulated using planar geometry. The traditional formulation of the objectives associated with the organs at risk has been modified to account for the use of complex dose delivery techniques such as beam intensity modulation. The new algorithm attempts to replicate the approach of a treatment planner whilst reducing the amount of computation required. Hybrid genetic search operators have been developed to improve the performance of the genetic algorithm by exploiting problem-specific features. The multiobjective genetic algorithm is formulated around the concept of Pareto optimality which enables the algorithm to search in parallel for different objectives. When the approach is applied without constraining the number of beams, the solution produces an indication of the minimum number of beams required. It is also possible to obtain non-dominated solutions for various numbers of beams, thereby giving the clinicians a choice in terms of the number of beams as well as in the orientation of these beams.
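The Pareto-optimality concept used above keeps every plan that no other plan beats on all objectives simultaneously. A minimal sketch of extracting the non-dominated set (the objective values below are hypothetical, not from the paper):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples,
    all objectives to be minimized. A point is dominated if some other
    point is no worse in every objective and differs in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# toy beam plans scored on (dose to organ at risk, number of beams)
plans = [(0.30, 5), (0.25, 7), (0.40, 3), (0.35, 5), (0.45, 4)]
print(pareto_front(plans))   # the trade-off curve offered to the clinician
```

Presenting the whole front, rather than a single scalarized optimum, is what gives the clinician a choice over the number of beams as well as their orientations.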
Numerical study of hydrogen-air supersonic combustion by using elliptic and parabolized equations
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.
1986-01-01
The two-dimensional Navier-Stokes and species continuity equations are used to investigate supersonic chemically reacting flow problems which are related to scramjet-engine configurations. A global two-step finite-rate chemistry model is employed to represent the hydrogen-air combustion in the flow. An algebraic turbulent model is adopted for turbulent flow calculations. The explicit unsplit MacCormack finite-difference algorithm is used to develop a computer program suitable for a vector processing computer. The computer program developed is then used to integrate the system of the governing equations in time until convergence is attained. The chemistry source terms in the species continuity equations are evaluated implicitly to alleviate stiffness associated with fast chemical reactions. The problems solved by the elliptic code are re-investigated by using a set of two-dimensional parabolized Navier-Stokes and species equations. A linearized fully-coupled fully-implicit finite difference algorithm is used to develop a second computer code which solves the governing equations by marching in space rather than time, resulting in a considerable saving in computer resources. Results obtained by using the parabolized formulation are compared with the results obtained by using the fully-elliptic equations. The comparisons indicate fairly good agreement of the results of the two formulations.
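The explicit unsplit MacCormack scheme named above is a predictor-corrector method. As a self-contained illustration of its structure (applied here to 1D linear advection with periodic boundaries, a far simpler setting than the paper's reacting Navier-Stokes system):

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, steps):
    """Explicit MacCormack predictor-corrector for u_t + c u_x = 0 on a
    periodic grid: forward difference in the predictor, backward
    difference in the corrector. Stable for Courant number |c*dt/dx| <= 1."""
    nu = c * dt / dx
    for _ in range(steps):
        up = u - nu * (np.roll(u, -1) - u)                 # predictor
        u = 0.5 * (u + up - nu * (up - np.roll(up, 1)))    # corrector
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u = maccormack_advection(u0.copy(), c=1.0, dx=0.01, dt=0.005, steps=200)
# after t = 1.0 the wave has advected exactly one period on the unit domain
err = float(np.max(np.abs(u - u0)))
print(err)   # small: the scheme is second-order accurate
```

For this linear problem the scheme reduces to Lax-Wendroff; in the nonlinear reacting-flow setting the predictor and corrector evaluate the full flux, and the stiff chemistry source terms are, as the abstract notes, treated implicitly instead.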
Establishing a theory for deuteron-induced surrogate reactions
NASA Astrophysics Data System (ADS)
Potel, G.; Nunes, F. M.; Thompson, I. J.
2015-09-01
Background: Deuteron-induced reactions serve as surrogates for neutron capture into compound states. Although these reactions are of great applicability, no theoretical efforts have been invested in this direction over the last decade. Purpose: The goal of this work is to establish on firm grounds a theory for deuteron-induced neutron-capture reactions. This includes formulating elastic and inelastic breakup in a consistent manner. Method: We describe this process both in post- and prior-form distorted wave Born approximation following previous works and discuss the differences in the formulation. While the convergence issues arising in the post formulation can be overcome in the prior formulation, in this case one still needs to take into account additional terms due to nonorthogonality. Results: We apply our method to 93Nb(d,p)X at Ed = 15 and 25 MeV and are able to obtain a good description of the data. We look at the various partial wave contributions, as well as elastic versus inelastic contributions. We also connect our formulation with transfer to neutron bound states. Conclusions: Our calculations demonstrate that the nonorthogonality term arising in the prior formulation is significant and is at the heart of the long-standing controversy between the post and the prior formulations of the theory. We also show that the cross sections for these reactions are angular-momentum dependent and therefore the commonly used Weisskopf limit is inadequate. Finally, we make important predictions for the relative contributions of elastic breakup and nonelastic breakup and call for elastic-breakup measurements to further constrain our model.
Solomon, Keith R; Wilks, Martin F; Bachman, Ammie; Boobis, Alan; Moretto, Angelo; Pastoor, Timothy P; Phillips, Richard; Embry, Michelle R
2016-11-01
When the human health risk assessment/risk management paradigm was developed, it did not explicitly include a "problem formulation" phase. The concept of problem formulation was first introduced in the context of ecological risk assessment (ERA) for the pragmatic reason to constrain and focus ERAs on the key questions. However, this need also exists for human health risk assessment, particularly for cumulative risk assessment (CRA), because of its complexity. CRA encompasses the combined threats to health from exposure via all relevant routes to multiple stressors, including biological, chemical, physical and psychosocial stressors. As part of the HESI Risk Assessment in the 21st Century (RISK21) Project, a framework for CRA was developed in which problem formulation plays a critical role. The focus of this effort is primarily on a chemical CRA (i.e., two or more chemicals) with subsequent consideration of non-chemical stressors, defined as "modulating factors" (ModFs). Problem formulation is a systematic approach that identifies all factors critical to a specific risk assessment and considers the purpose of the assessment, scope and depth of the necessary analysis, analytical approach, available resources and outcomes, and overall risk management goal. There are numerous considerations that are specific to multiple stressors, and proper problem formulation can help to focus a CRA to the key factors in order to optimize resources. As part of the problem formulation, conceptual models for exposures and responses can be developed that address these factors, such as temporal relationships between stressors and consideration of the appropriate ModFs.
Smart approaches to glucose-responsive drug delivery.
Webber, Matthew J; Anderson, Daniel G
2015-01-01
A grand challenge in the field of "smart" drug delivery has been the quest to create formulations that can sense glucose and respond by delivering an appropriate dose of insulin. This approach, referred to as the "fully synthetic pancreas", envisions closed-loop insulin therapy. The strategies for incorporating glucose sensing into formulations can be broadly categorized into three subsets: enzymatic sensing, natural glucose-binding proteins and synthetic molecular recognition. Here, we highlight some examples of each of these approaches. The challenges remaining en route to the realization of closed-loop insulin therapy are substantial, and include improved response time, more authentic fidelity in glycemic control, improved biocompatibility for delivery materials and assurance of both safety and efficacy. The ubiquitous existence of glucose, combined with the unstable and toxic properties of insulin, further compound efforts towards the generation of a fully synthetic pancreas. However, given the growing incidence of both type-1 and type-2 diabetes, there is significant potential impact from the realization of such an approach on improving therapeutic management of the disease.
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, doing so would help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints, and to exemplify its usability using the case of a porridge-mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
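The structure of such a formulation, cost minimization under nutrient floors plus one extra linear consistency constraint, can be sketched with a standard LP solver. All ingredient names and nutrient numbers below are made-up placeholders, not the paper's data; the starch cap stands in for the swelling-based consistency constraint, assuming it can be expressed linearly.

```python
import numpy as np
from scipy.optimize import linprog

# toy porridge mix: minimize cost subject to nutrient floors and a
# linear "consistency" cap on total starch (limits swelling/thickness)
# ingredients (per 100 g): maize   bean    oil    sugar
cost    = np.array([0.05,  0.10,  0.20,  0.08])   # currency units
energy  = np.array([360.0, 340.0, 900.0, 400.0])  # kcal
protein = np.array([9.0,   22.0,  0.0,   0.0])    # g
starch  = np.array([70.0,  45.0,  0.0,   0.0])    # g

res = linprog(
    cost,
    A_ub=np.array([-energy, -protein, starch]),   # floors become <= via sign flip
    b_ub=np.array([-800.0, -25.0, 60.0]),
    bounds=[(0.0, None)] * 4,
)
print(res.status, res.x)   # status 0: a feasible minimum-cost mix exists
```

Infeasibility of this LP for a given ingredient list is exactly the outcome the study reports for local ingredients alone; adding mineral-salt "ingredients" adds columns that restore feasibility at low cost.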
A Bayesian Formulation of Behavioral Control
ERIC Educational Resources Information Center
Huys, Quentin J. M.; Dayan, Peter
2009-01-01
Helplessness, a belief that the world is not subject to behavioral control, has long been central to our understanding of depression, and has influenced cognitive theories, animal models and behavioral treatments. However, despite its importance, there is no fully accepted definition of helplessness or behavioral control in psychology or…
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
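The projected gradient scheme mentioned above alternates a gradient step on the smooth part with a Euclidean projection onto the feasible set. The sketch below illustrates the generic scheme on a least-squares loss with a simple L2-ball constraint; it is not the paper's multi-task solver, and the problem data are random placeholders.

```python
# Generic projected gradient descent: gradient step + Euclidean projection.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
radius = 1.0

def project_l2_ball(w, r):
    """Euclidean projection onto {w : ||w||_2 <= r}."""
    n = np.linalg.norm(w)
    return w if n <= r else w * (r / n)

w = np.zeros(10)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad
for _ in range(500):
    grad = A.T @ (A @ w - b)             # gradient of 0.5*||Aw - b||^2
    w = project_l2_ball(w - step * grad, radius)
```

With a fixed step of 1/L the objective decreases monotonically, which is the key fact behind the global convergence argument for such schemes.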
Stability analysis in tachyonic potential chameleon cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farajollahi, H.; Salehi, A.; Tayebi, F.
2011-05-01
We study general properties of attractors for the tachyonic potential chameleon scalar-field model, which possesses cosmological scaling solutions. An analytic formulation is given to obtain the fixed points, with a discussion of their stability. The model predicts a dynamical equation-of-state parameter with phantom-crossing behavior for an accelerating universe. We constrain the parameters of the model by best fitting to recent supernova data sets and to simulated data points for a redshift-drift experiment generated by Monte Carlo simulations.
Quadratic Optimization in the Problems of Active Control of Sound
NASA Technical Reports Server (NTRS)
Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources, called controls, that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).
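The unconstrained L2 problem has a closed-form character: among all control-source vectors g satisfying a linear anti-noise condition M g = d, the minimum-L2-norm one is returned directly by a least-squares solver for the underdetermined system. The sketch below uses random placeholder matrices, not an acoustics discretization.

```python
# Minimum-L2-norm control sources for an underdetermined linear condition.
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 20))    # 5 constraint rows, 20 candidate control sources
d = rng.normal(size=5)          # required anti-noise values

g, *_ = np.linalg.lstsq(M, d, rcond=None)   # minimum-norm solution
g_pinv = np.linalg.pinv(M) @ d              # same solution via pseudoinverse
```

An L1-minimal solution of the same system (the source-strength analogue) would instead require a linear program and typically concentrates the sources, which is one way to see why the L1 and L2 optima can differ drastically.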
The equations of motion for moist atmospheric air
NASA Astrophysics Data System (ADS)
Makarieva, Anastassia M.; Gorshkov, Victor G.; Nefiodov, Andrei V.; Sheil, Douglas; Nobre, Antonio Donato; Bunyard, Peter; Nobre, Paulo; Li, Bai-Lian
2017-07-01
How phase transitions affect the motion of moist atmospheric air remains controversial. In the early 2000s two distinct differential equations of motion were proposed. Besides their contrasting formulations for the acceleration of condensate, the equations differ concerning the presence/absence of a term equal to the rate of phase transitions multiplied by the difference in velocity between condensate and air. This term was interpreted in the literature as the "reactive motion" associated with condensation. The reasoning behind this reactive motion was that when water vapor condenses and droplets begin to fall the remaining gas must move upward to conserve momentum. Here we show that the two contrasting formulations imply distinct assumptions about how gaseous air and condensate particles interact. We show that these assumptions cannot be simultaneously applicable to condensation and evaporation. Reactive motion leading to an upward acceleration of air during condensation does not exist. The reactive motion term can be justified for evaporation only; it describes the downward acceleration of air. We emphasize the difference between the equations of motion (i.e., equations constraining velocity) and those constraining momentum (i.e., equations of motion and continuity combined). We show that owing to the imprecise nature of the continuity equations, consideration of total momentum can be misleading and that this led to the reactive motion controversy. Finally, we provide a revised and generally applicable equation for the motion of moist air.
Fielding-Miller, Rebecca; Dunkle, Kristin
2018-01-01
Women who engage in transactional sex are more likely to experience intimate partner violence (IPV) and are at higher risk of HIV. However, women engage in transactional sex for a variety of reasons and the precise mechanism linking transactional sex and IPV is not fully understood. We conducted a behavioural survey with a cross-sectional sample of 401 women attending 1 rural and 1 urban public antenatal clinic in Swaziland between February and June 2014. We used structural equation modelling to identify and measure constrained relationship agency (CRA) as a latent variable, and then tested the hypothesis that CRA plays a significant role in the pathway between IPV and transactional sex. After controlling for CRA, receiving more material goods from a sexual partner was not associated with higher levels of physical or sexual IPV and was protective against emotional IPV. CRA was the single largest predictor of IPV, and more education was associated with decreased levels of constrained relationship agency. Policies and interventions that target transactional sex as a driver of IPV and HIV may be more successful if they instead target the broader social landscape that constrains women’s agency and drives the harmful aspects of transactional sex. PMID:29132281
NASA Astrophysics Data System (ADS)
Hee, S.; Vázquez, J. A.; Handley, W. J.; Hobson, M. P.; Lasenby, A. N.
2017-04-01
Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era cosmic microwave background, baryonic acoustic oscillations (BAO), Type Ia supernova (SNIa) and Lyman α (Lyα) data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance Λ cold dark matter (ΛCDM) model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other, a supernegative equation of state (also known as 'phantom dark energy') is identified within the 1.5σ confidence intervals of the posterior distribution. To identify the power of different data sets in constraining the dark energy equation of state, we use a novel formulation of the Kullback-Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible data set combination. The SNIa and BAO data sets are shown to provide much more constraining power in comparison to the Lyα data sets. Further, SNIa and BAO constrain most strongly around redshift range 0.1-0.5, whilst the Lyα data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular data set, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
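The Kullback-Leibler formalism mentioned above measures the information gained in moving from prior to posterior. As a hedged illustration (not the paper's computation), the sketch below evaluates the closed-form KL divergence between one-dimensional Gaussians standing in for a w(z) node's prior and its posterior under two hypothetical data sets; the numbers are invented.

```python
# Information gain D_KL(posterior || prior) for Gaussian stand-ins.
import numpy as np

def kl_gauss(mu_p, s_p, mu_q, s_q):
    """D_KL( N(mu_p, s_p^2) || N(mu_q, s_q^2) ) in nats."""
    return np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

prior    = (0.0, 1.0)    # broad prior on a w(z) node (illustrative)
post_sn  = (-1.0, 0.1)   # tight posterior, e.g. after adding SNIa (invented)
post_lya = (-0.8, 0.6)   # weaker constraint, e.g. from Lya alone (invented)

gain_sn  = kl_gauss(*post_sn,  *prior)
gain_lya = kl_gauss(*post_lya, *prior)
print(gain_sn, gain_lya)
```

A tighter posterior yields a larger divergence, which is exactly how this formalism ranks the constraining power of data-set combinations.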
Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.
Böing-Messing, Florian; Mulder, Joris
2018-05-03
In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
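The 1000-member perturbed parameter ensemble described above rests on Latin hypercube sampling of the uncertain parameters. A minimal sketch using SciPy's QMC module is shown below; the parameter names and ranges are illustrative stand-ins, not the actual LPX-Bern values.

```python
# Latin hypercube sampling of a perturbed parameter ensemble.
import numpy as np
from scipy.stats import qmc

n_members = 1000
# (lower, upper) bounds for three hypothetical parameters:
# Vcmax scale, soil-C turnover scale, Q10 of respiration
lower = np.array([0.5, 0.5, 1.2])
upper = np.array([1.5, 2.0, 3.5])

sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=n_members)         # stratified samples in [0, 1)^3
ensemble = qmc.scale(unit, lower, upper)   # shape (1000, 3)
```

Each ensemble row would then drive one model run, to be scored against the observational constraints.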
The method of projected characteristics for the evolution of magnetic arches
NASA Technical Reports Server (NTRS)
Nakagawa, Y.; Hu, Y. Q.; Wu, S. T.
1987-01-01
A numerical method for solving the fully nonlinear MHD equations is described. In particular, the formulation based on the newly developed method of projected characteristics (Nakagawa, 1981), suitable for studying the evolution of magnetic arches due to motions of their foot-points, is presented. The final formulation is given in the form of difference equations; therefore, an analysis of numerical stability is also presented. Further, the most important derivation of physically self-consistent, time-dependent boundary conditions (i.e., the evolving boundary equations) is given in detail, and some results obtained with such boundary equations are reported.
Perspectives of construction robots
NASA Astrophysics Data System (ADS)
Stepanov, M. A.; Gridchin, A. M.
2018-03-01
This article is an overview of construction robot features, based on formulating a list of requirements for different types of construction robots in relation to different types of construction work. It describes a variety of construction works and ways to design new robots, or adapt existing robot designs, for a construction process. It also shows the prospects of AI-controlled machines and the implementation of automated control systems and networks on construction sites. Finally, different ways to develop and improve the construction process, including its ecological aspects, are formulated: wide robotization, the creation of data communication networks and, in the longer term, the establishment of a fully AI-controlled construction complex.
Modeling of confined turbulent fluid-particle flows using Eulerian and Lagrangian schemes
NASA Technical Reports Server (NTRS)
Adeniji-Fashola, A.; Chen, C. P.
1990-01-01
Two important aspects of fluid-particulate interaction in dilute gas-particle turbulent flows (turbulent particle dispersion and turbulence modulation effects) are addressed, using the Eulerian and Lagrangian modeling approaches to describe the particulate phase. Gradient-diffusion approximations are employed in the Eulerian formulation, while a stochastic procedure is utilized to simulate turbulent dispersion in the Lagrangian formulation. The k-epsilon turbulence model is used to characterize the time and length scales of the continuous-phase turbulence. Models proposed for both schemes are used to predict turbulent fully developed gas-solid vertical pipe flow with reasonable accuracy.
Saturn PRobe Interior and aTmosphere Explorer (SPRITE)
NASA Technical Reports Server (NTRS)
Simon, Amy; Banfield, D.; Atkinson, D.; Atreya, S.; Brinckerhoff, W.; Colaprete, A.; Coustenis, A.; Fletcher, L.; Guillot, T.; Hofstadter, M.;
2016-01-01
The Vision and Voyages Planetary Decadal Survey identified a Saturn Probe mission as one of the high priority New Frontiers mission targets[1]. Many aspects of the Saturn system will not have been fully investigated at the end of the Cassini mission, because of limitations in its implementation and science instrumentation. Fundamental measurements of the interior structure and noble gas abundances of Saturn are needed to better constrain models of Solar System formation, as well as to provide an improved context for exoplanet systems. The SPRITE mission will fulfill the scientific goals of the Decadal Survey Saturn probe mission. It will also provide ground truth for quantities constrained by Cassini and conduct new investigations that improve our understanding of Saturn's interior structure and composition, and by proxy, those of extrasolar giant planets.
OPTIMASS: A package for the minimization of kinematic mass functions with constraints
Cho, Won Sang; Gainer, James S.; Kim, Doojin; ...
2016-01-07
Reconstructed mass variables, such as M_2, M_2C, M^*_T, and M_T2^W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. Here, we describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M_2 variables.
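The Augmented Lagrangian Method named above can be illustrated on a toy equality-constrained minimization (not a physical M_2 computation; the quadratic objective, the momentum-balance-like constraint, and the analytic optimum (1.5, -1.5) are all invented for the sketch). The outer loop alternates an unconstrained minimization of the augmented Lagrangian with a multiplier update.

```python
# Toy Augmented Lagrangian loop for min f(p) subject to c(p) = 0.
import numpy as np
from scipy.optimize import minimize

def f(p):                       # smooth stand-in objective
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

def c(p):                       # equality constraint (momentum-balance-like)
    return p[0] + p[1]

lam, mu, x = 0.0, 10.0, np.zeros(2)
for _ in range(20):             # outer Augmented Lagrangian iterations
    L = lambda p, lam=lam, mu=mu: f(p) + lam * c(p) + 0.5 * mu * c(p) ** 2
    x = minimize(L, x, method="BFGS").x   # inner unconstrained solve
    lam += mu * c(x)            # first-order multiplier update
```

Unlike a pure quadratic penalty, the multiplier update lets the constraint be satisfied to high accuracy without driving the penalty weight to infinity.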
Irradiation, microwave and alternative energy-based treatments for low water activity foods
USDA-ARS?s Scientific Manuscript database
There is an increasing recognition of low water activity foods as vectors for human pathogens. Partially or fully dried agricultural commodities, along with modern formulated dried food products, are complex, and designed to meet a variety of nutritional, sensory, and market-oriented goals. This comp...
Garidel, Patrick; Pevestorf, Benjamin; Bahrenburg, Sven
2015-11-01
We studied the stability of freeze-dried therapeutic protein formulations over a range of initial concentrations (from 40 to 160 mg/mL) and employed a variety of formulation strategies (including buffer-free freeze dried formulations, or BF-FDF). Highly concentrated, buffer-free liquid formulations of therapeutic monoclonal antibodies (mAbs) have been shown to be a viable alternative to conventionally buffered preparations. We considered whether it is feasible to use the buffer-free strategy in freeze-dried formulations, as an answer to some of the known drawbacks of conventional buffers. We therefore conducted an accelerated stability study (24 weeks at 40 °C) to assess the feasibility of stabilizing freeze-dried formulations without "classical" buffer components. Factors monitored included pH stability, protein integrity, and protein aggregation. Because the protein solutions are inherently self-buffering, and the system's buffer capacity scales with protein concentration, we included highly concentrated buffer-free freeze-dried formulations in the study. The tested formulations ranged from "fully formulated" (containing both conventional buffer and disaccharide stabilizers) to "buffer-free" (including formulations with only disaccharide lyoprotectant stabilizers) to "excipient-free" (with neither added buffers nor stabilizers). We evaluated the impacts of varying concentrations, buffering schemes, pHs, and lyoprotectant additives. At the end of 24 weeks, no change in pH was observed in any of the buffer-free formulations. Unbuffered formulations were found to have shorter reconstitution times and lower opalescence than buffered formulations. Protein stability was assessed by visual inspection, sub-visible particle analysis, protein monomer content, charge variants analysis, and hydrophobic interaction chromatography. 
All of these measures found the stability of buffer-free formulations that included a disaccharide stabilizer comparable to buffer-based formulations, especially at protein concentrations up to and including 115 mg/mL. Copyright © 2015 Elsevier B.V. All rights reserved.
Mechanics of Composite Materials for Spacecraft
1992-08-01
this kind lead to a system of linear algebraic equations which involve certain eigenstrain influence coefficients and the given instantaneous...manner. then pa would be the remaining overall strain caused by the eigenstrains pa... is the overall stress caused by pa in a fully constrained...medium. In the presence of both mechanical overall stress or strain, and uniform phase eigenstrains, the local fields in the
Capabilities for Constrained Military Operations
2016-12-01
capabilities that have low technology risk and accomplish all of this on a short timeline. I fully endorse all of the recommendations contained in...for the U.S. to address such conflicts. The good news is that the DoD can prevail with inexpensive capabilities that have low technology risk and on a...future actions. The Study took a three-pronged approach to countering potential adversaries' strategies for waging long-term campaigns
NASA Astrophysics Data System (ADS)
Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III
2015-12-01
Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of the dry air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimates of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and record length could affect the constraints on the model and its predictions.
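The MCMC calibration step can be sketched with a toy random-walk Metropolis sampler constraining a single Q10-like temperature-sensitivity parameter of a stand-in "emulator" against synthetic observations. The emulator form, the data, and all numbers are invented for illustration; only the assimilation pattern mirrors the proposal.

```python
# Toy Metropolis MCMC calibration of one emulator parameter.
import numpy as np

rng = np.random.default_rng(3)

def emulator(q10, t):                # stand-in for a carbon-cycle emulator
    return 2.0 * q10 ** ((t - 10.0) / 10.0)

t_obs = np.linspace(0.0, 30.0, 20)
y_obs = emulator(2.0, t_obs) + rng.normal(0.0, 0.1, t_obs.size)  # truth q10 = 2

def log_post(q10):
    if not 1.0 < q10 < 4.0:          # flat prior on (1, 4)
        return -np.inf
    r = y_obs - emulator(q10, t_obs)
    return -0.5 * np.sum(r ** 2) / 0.1 ** 2   # Gaussian likelihood

chain, q = [], 1.5
lp = log_post(q)
for _ in range(5000):                # random-walk Metropolis
    q_new = q + rng.normal(0.0, 0.05)
    lp_new = log_post(q_new)
    if np.log(rng.uniform()) < lp_new - lp:
        q, lp = q_new, lp_new
    chain.append(q)
```

Discarding a burn-in, the chain samples the posterior of the parameter, whose spread quantifies the remaining parameter uncertainty after assimilation.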
Surface tension effects on fully developed liquid layer flow over a convex corner
NASA Astrophysics Data System (ADS)
Bhatti, Ifrah; Farid, Saadia; Ullah, Saif; Riaz, Samia; Faryad, Maimoona
2018-04-01
This investigation deals with the study of fully developed liquid layer flow, including surface tension effects, confronting a convex corner in the direction of the fluid flow. At the point of interaction, the governing equations are formulated using a double-deck structure and matched asymptotic techniques. Linearized solutions for small angles are obtained analytically. The solutions corresponding to similar flow neglecting surface tension effects are also recovered as a special case of our general solutions. Finally, the influence of pertinent parameters on the flow, as well as a comparison between models, is shown by graphical illustration.
The exceptional sediment load of fine-grained dispersal systems: Example of the Yellow River, China.
Ma, Hongbo; Nittrouer, Jeffrey A; Naito, Kensuke; Fu, Xudong; Zhang, Yuanfeng; Moodie, Andrew J; Wang, Yuanjian; Wu, Baosheng; Parker, Gary
2017-05-01
Sedimentary dispersal systems with fine-grained beds are common, yet the physics of sediment movement within them remains poorly constrained. We analyze sediment transport data for the best-documented, fine-grained river worldwide, the Huanghe (Yellow River) of China, where sediment flux is underpredicted by an order of magnitude according to well-accepted sediment transport relations. Our theoretical framework, bolstered by field observations, demonstrates that the Huanghe tends toward upper-stage plane bed, yielding minimal form drag, thus markedly enhancing sediment transport efficiency. We present a sediment transport formulation applicable to all river systems with silt to coarse-sand beds. This formulation demonstrates a remarkably sensitive dependence on grain size within a certain narrow range and therefore has special relevance to silt-sand fluvial systems, particularly those affected by dams.
Quantum field theory of interacting dark matter and dark energy: Dark monodromies
D’Amico, Guido; Hamill, Teresa; Kaloper, Nemanja
2016-11-28
We discuss how to formulate a quantum field theory of dark energy interacting with dark matter. We show that the proposals based on the assumption that dark matter is made up of heavy particles with masses which are very sensitive to the value of dark energy are strongly constrained. Quintessence-generated long-range forces and radiative stability of the quintessence potential require that such dark matter and dark energy are completely decoupled. However, if dark energy and a fraction of dark matter are very light axions, they can have significant mixings which are radiatively stable and perfectly consistent with quantum field theory. Such models can naturally occur in multi-axion realizations of monodromies. The mixings yield interesting signatures which are observable and within current cosmological limits, but could be constrained further by future observations.
A shrinking hypersphere PSO for engineering optimisation problems
NASA Astrophysics Data System (ADS)
Yadav, Anupam; Deep, Kusum
2016-03-01
Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided both statistically and graphically. Moreover, three engineering design problems, namely the welded beam, compression spring and pressure vessel design problems, are solved using SHPSO and the results are compared with state-of-the-art algorithms.
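As a generic illustration of the class of methods discussed (a plain PSO with a penalty for constraint violation, not the authors' shrinking-hypersphere SHPSO), the sketch below solves an invented toy COP: minimize x^2 + y^2 subject to x + y >= 1, whose known optimum is x = y = 0.5 with objective 0.5.

```python
# Minimal penalty-based PSO on a toy constrained optimisation problem.
import numpy as np

rng = np.random.default_rng(7)

def fitness(p):
    obj = p[0] ** 2 + p[1] ** 2
    viol = max(0.0, 1.0 - (p[0] + p[1]))   # violation of x + y >= 1
    return obj + 1e3 * viol                 # simple penalty handling

n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-2, 2, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()          # global best

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()
```

The SHPSO of the paper replaces this free swarm motion with movement constrained to shrinking hyperspheres and a parameter-free constraint handler, but the overall loop structure is the same.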
NASA Astrophysics Data System (ADS)
le Graverend, J.-B.
2018-05-01
A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced at some point in the creep life. Furthermore, a phenomenological model aimed at describing the evolution of the constrained lattice misfit during monotonic creep loading is also formulated. The response of the lattice-misfit-dependent plasticity-coupled damage model is compared with experimental results obtained at 140 and 160 MPa on the first-generation Ni-based single-crystal superalloy MC2. The comparison reveals that the damage model performs well at 160 MPa but less so at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides several advantages: a firm connection between the microscale (pore scale) and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for it are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Owing to the sharpness of the fronts, an adaptive moving-mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the experimental data not used for the parameter estimation.
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability-constrained geometrically nonlinear shallow trusses with snap-through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap-through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in the literature are presented to verify the method.
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
Quantum field theory of interacting dark matter and dark energy: Dark monodromies
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Amico, Guido; Hamill, Teresa; Kaloper, Nemanja
We discuss how to formulate a quantum field theory of dark energy interacting with dark matter. We show that the proposals based on the assumption that dark matter is made up of heavy particles with masses which are very sensitive to the value of dark energy are strongly constrained. Quintessence-generated long-range forces and radiative stability of the quintessence potential require that such dark matter and dark energy are completely decoupled. However, if dark energy and a fraction of dark matter are very light axions, they can have significant mixings which are radiatively stable and perfectly consistent with quantum field theory. Such models can naturally occur in multi-axion realizations of monodromies. The mixings yield interesting signatures which are observable and are within current cosmological limits but could be constrained further by future observations.
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well in a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
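As a rough illustration of the underlying decision rule (not the authors' filter-bank construction), Wald's classical sequential probability ratio test can be sketched in a few lines. The Gaussian "miss distance" hypotheses and risk levels below are invented for the example:

```python
import math

def sprt(samples, pdf0, pdf1, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test.

    Accumulates the log-likelihood ratio sample by sample and stops as soon
    as it crosses a threshold set by the allowed false-alarm (alpha) and
    missed-detection (beta) risks.
    Returns ('H0' | 'H1' | 'undecided', number of samples used).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(pdf1(x)) - math.log(pdf0(x))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Hypothetical Gaussian hypotheses on a normalized miss distance:
# H0 has mean 0, H1 has mean 3, both with unit variance.
def gauss(mu):
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

decision, n = sprt([2.8, 3.1, 2.9, 3.3], gauss(0.0), gauss(3.0))
```

Note how the test commits after only two samples here: the sequential form spends exactly as many observations as the evidence requires, which is the attraction over a fixed threshold on a single collision-probability estimate.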
Computational strategies in the dynamic simulation of constrained flexible MBS
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.
1993-01-01
This research focuses on the computational dynamics of flexible constrained multibody systems. First, a recursive mapping formulation of the kinematical expressions in a minimum dimension, as well as the matrix representation of the equations of motion, are presented. The method employs Kane's equation, FEM, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time-variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time-variant boundary conditions. The above methodologies and computational procedures developed are being implemented in a program called DYAMUS.
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
DallAnese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
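To make the "conservative bounds for arbitrary distributions" idea concrete, here is a minimal sketch (not the paper's formulation) of how a voltage chance constraint can be backed off using the distribution-free Cantelli inequality, compared with the tighter back-off valid only under a Gaussian assumption. The numbers are illustrative:

```python
import math
from statistics import NormalDist

def voltage_margin_cantelli(mu, sigma, eps):
    """Distribution-free back-off via Cantelli's inequality: if
    mu + sigma*sqrt((1-eps)/eps) <= v_max, then P(v > v_max) <= eps
    for ANY forecast-error distribution with this mean and variance."""
    return mu + sigma * math.sqrt((1 - eps) / eps)

def voltage_margin_gaussian(mu, sigma, eps):
    """Tighter back-off, valid only if the forecast error is Gaussian."""
    return mu + sigma * NormalDist().inv_cdf(1 - eps)

# Hypothetical bus voltage forecast (p.u.), its std dev, and risk level:
mu, sigma, eps = 1.02, 0.01, 0.05
conservative = voltage_margin_cantelli(mu, sigma, eps)   # ~1.0636 p.u.
gaussian = voltage_margin_gaussian(mu, sigma, eps)       # ~1.0364 p.u.
```

The gap between the two margins is the price paid for robustness to arbitrary forecast-error distributions, which is exactly the trade the approximate chance constraints in the abstract make.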
NASA Technical Reports Server (NTRS)
Florschuetz, L. W.; Su, C. C.
1985-01-01
Spanwise average heat fluxes, resolved in the streamwise direction to one streamwise hole spacing, were measured for two-dimensional arrays of circular air jets impinging on a heat transfer surface parallel to the jet orifice plate. The jet flow, after impingement, was constrained to exit in a single direction along the channel formed by the jet orifice plate and the heat transfer surface. The crossflow thus originated from the jets themselves following impingement; in some cases an initial crossflow was also present that approached the array through an upstream extension of the channel. The regional average heat fluxes are considered as a function of parameters associated with corresponding individual spanwise rows within the array. A linear superposition model was employed to formulate appropriate governing parameters for the individual row domain. The effects of flow history upstream of an individual row domain are also considered. The results are formulated in terms of individual spanwise row parameters. A corresponding set of streamwise resolved heat transfer characteristics formulated in terms of flow and geometric parameters characterizing the overall arrays is described.
Constrained Metric Learning by Permutation Inducing Isometries.
Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle
2016-01-01
The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry-preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available Labeled Faces in the Wild, Viewpoint Invariant Pedestrian Recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.
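The isometry constraint can be illustrated with a toy Mahalanobis metric: a feature permutation P is an isometry of d_M exactly when P^T M P = M. The example below (invented numbers, not the IC-LDML objective) uses M = aI + bJ, which is invariant under every permutation, so permuting both feature vectors leaves the distance unchanged:

```python
def mahalanobis_sq(M, x, y):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = [xi - yi for xi, yi in zip(x, y)]
    n = len(d)
    return sum(d[i] * M[i][j] * d[j] for i in range(n) for j in range(n))

n = 4
a, b = 2.0, 0.5
# M = a*I + b*J satisfies P^T M P = M for every permutation matrix P,
# so every feature permutation is an isometry of this metric.
M = [[a * (i == j) + b for j in range(n)] for i in range(n)]

x = [0.3, 1.2, -0.7, 0.5]
y = [1.0, 0.1, 0.4, -0.2]
perm = [2, 0, 3, 1]                  # a rigid re-indexing of the features
xp = [x[p] for p in perm]
yp = [y[p] for p in perm]

d_orig = mahalanobis_sq(M, x, y)
d_perm = mahalanobis_sq(M, xp, yp)   # equal: the permutation is an isometry
```

An unconstrained learned M (e.g. an arbitrary diagonal) would generally break this equality, which is precisely the inconsistency the isometry constraints rule out.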
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which are bilinear mappings, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus, it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference-rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R
2011-04-15
In this study, an inexact-chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and the applicability of the modeling process can thus be highly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and can thus help decision makers identify desired policies under various system-reliability constraints on the water environmental capacity for pollutants. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.
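The CCP ingredient can be sketched in isolation. Under a normality assumption, a probabilistic capacity constraint P(discharge ≤ b) ≥ 1 − p is replaced by a deterministic constraint using the p-quantile of b; sweeping the violation risk p traces the benefit/risk tradeoff the abstract mentions. All numbers below are hypothetical:

```python
from statistics import NormalDist

def deterministic_capacity(mu_b, sigma_b, p):
    """Deterministic equivalent of the chance constraint
    P(discharge <= b) >= 1 - p with b ~ N(mu_b, sigma_b^2):
    the discharge must not exceed the p-quantile of b."""
    return mu_b + sigma_b * NormalDist().inv_cdf(p)

# Hypothetical environmental capacity of the water body (tonnes/year):
mu_b, sigma_b = 100.0, 10.0
caps = [deterministic_capacity(mu_b, sigma_b, p) for p in (0.01, 0.05, 0.10)]
# Higher admissible violation risk -> looser (larger) allowable discharge,
# hence larger system benefit at the price of lower reliability.
```

This is the mechanism behind the "interval solutions associated with different risk levels of constraint violation" in the study.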
Constrained motion model of mobile robots and its applications.
Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong
2009-06-01
Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with the fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
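The computational gain of growing the reachable region layer by layer, rather than enumerating all 2^k move sequences, can be shown on a simplified grid abstraction (the paper's motion and sensor models are richer; the moves and obstacles below are invented):

```python
def k_step_reachable(start, k, moves, blocked):
    """Cells a robot can occupy after at most k steps, grown one BFS
    frontier per step (k layers) instead of enumerating every one of the
    2^k .. |moves|^k possible move sequences."""
    region = {start}
    frontier = {start}
    for _ in range(k):
        nxt = set()
        for (x, y) in frontier:
            for dx, dy in moves:
                c = (x + dx, y + dy)
                if c not in blocked and c not in region:
                    nxt.add(c)
        region |= nxt
        frontier = nxt
    return region

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # assumed motion styles
blocked = {(1, 0), (1, 1)}                   # hypothetical obstacles
region = k_step_reachable((0, 0), 2, moves, blocked)
```

Each pass touches only the newest frontier, so the work per step is proportional to the boundary of the region, mirroring the k + 1 region computations of the paper's algorithm.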
Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi
2015-01-01
The Kalman filter (KF) has always been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the concerned navigation errors when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which can provide explicit formulations of the navigation errors with influencing variables. Finally, multiple practical experimental tests at a fixed position were performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach for analytic descriptions of estimation behaviors of the concerned navigation errors, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically. PMID:25688588
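One standard way to impose a linear state equality constraint on a Kalman estimate (a sketch of the general projection technique, not this paper's specific ARIMU constraint) is the minimum-variance projection x_c = x − P Dᵀ(D P Dᵀ)⁻¹(D x − d). For a single scalar constraint the inverse is a scalar, so it fits in plain Python; the two-state example is hypothetical:

```python
def constrain_estimate(x, P, D, d):
    """Project an unconstrained Kalman estimate x (covariance P) onto the
    linear equality constraint D.x = d (one scalar constraint) via the
    minimum-variance projection x_c = x - P D^T (D P D^T)^-1 (D x - d)."""
    n = len(x)
    PDt = [sum(P[i][j] * D[j] for j in range(n)) for i in range(n)]  # P D^T
    S = sum(D[i] * PDt[i] for i in range(n))        # scalar D P D^T
    r = sum(D[i] * x[i] for i in range(n)) - d      # constraint residual
    return [x[i] - PDt[i] * r / S for i in range(n)]

# Hypothetical 2-state example: constrain the states to sum to 1.
x = [0.7, 0.5]
P = [[0.2, 0.05], [0.05, 0.1]]
D = [1.0, 1.0]
xc = constrain_estimate(x, P, D, 1.0)
# xc satisfies the constraint exactly while staying close to x in the
# metric weighted by the covariance.
```

The constrained estimate never has larger error covariance than the unconstrained one, which is the intuition behind the constrained KF's superiority reported above.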
Nouraei, Mehdi; Acosta, Edgar J
2017-06-01
Fully dilutable microemulsions (μEs), used to design self-microemulsifying delivery systems (SMEDS), are formulated as concentrate solutions containing oil and surfactants, without water. As water is added to dilute these systems, various μEs are produced (water-swollen reverse micelles, bicontinuous systems, and oil-swollen micelles), without the onset of phase separation. Currently, the formulation of dilutable μEs follows a trial-and-error approach that has had limited success. The objective of this work is to introduce the use of the hydrophilic-lipophilic-difference (HLD) and net-average-curvature (NAC) frameworks to predict the solubilisation features of ternary phase diagrams of lecithin-linker μEs and the use of these predictions to guide the formulation of dilutable μEs. To this end, the characteristic curvatures (Cc) of soybean lecithin (surfactant), glycerol monooleate (lipophilic linker) and polyglycerol caprylate (hydrophilic linker) and the equivalent alkane carbon number (EACN) of ethyl caprate (oil) were obtained via phase scans with reference surfactant-oil systems. These parameters were then used to calculate the HLD of lecithin-linkers-ethyl caprate microemulsions. The calculated HLDs were able to predict the phase transitions observed in the phase scans. The NAC was then used to fit and predict phase volumes obtained from salinity phase scans, and to predict the solubilisation features of ternary phase diagrams of the lecithin-linker formulations. The HLD-NAC predictions were reasonably accurate, and indicated that the largest region for dilutable μEs was obtained with slightly negative HLD values. The NAC framework also predicted, and explained, the changes in microemulsion properties along dilution lines. Copyright © 2017 Elsevier Inc. All rights reserved.
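For readers unfamiliar with the HLD framework, the arithmetic is simple: a commonly used nonionic-type form is HLD = b(S) − K·EACN + Cc, with the blend Cc obtained from a linear mixing rule over the surfactant and linker fractions. The sketch below uses invented parameter values, not the fitted Cc and EACN of this paper:

```python
def cc_mixture(fractions, ccs):
    """Linear mixing rule for the characteristic curvature (Cc) of a
    surfactant/linker blend."""
    return sum(f * c for f, c in zip(fractions, ccs))

def hld(b_salinity, K, eacn, cc):
    """Simplified nonionic-type HLD = b(S) - K*EACN + Cc (temperature term
    omitted). Negative HLD favors oil-in-water curvature; near zero,
    bicontinuous systems."""
    return b_salinity - K * eacn + cc

# Illustrative numbers only -- NOT the paper's fitted parameters:
fracs = [0.5, 0.3, 0.2]     # lecithin, lipophilic linker, hydrophilic linker
ccs = [-1.0, 1.0, -1.5]     # hypothetical characteristic curvatures
cc = cc_mixture(fracs, ccs)                         # -0.5
h = hld(b_salinity=0.0, K=0.17, eacn=4.0, cc=cc)    # negative -> type I tendency
```

Scanning the linker fractions shifts Cc, and hence HLD, which is how a formulator steers the system toward the slightly negative HLD region the paper identifies as best for dilutable μEs.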
Solving Connected Subgraph Problems in Wildlife Conservation
NASA Astrophysics Data System (ADS)
Dilkina, Bistra; Gomes, Carla P.
We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
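The budget-constrained objective is easy to state exactly, even though solving it at scale needs the MIP machinery described above. A brute-force reference implementation (exponential, so tiny invented instances only, and no relation to the paper's subtour-elimination encoding) makes the problem definition concrete:

```python
from itertools import combinations

def best_connected_subgraph(nodes, edges, cost, profit, budget, terminals):
    """Exhaustive search for the maximum-profit connected subgraph that
    contains all terminal (distinguished) vertices and respects the budget.
    Exponential in |nodes|; for definition/testing only."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(S):
        S = set(S)
        seen, stack = set(), [next(iter(S))]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend((adj[v] & S) - seen)
        return seen == S

    best, best_profit = None, float("-inf")
    for r in range(1, len(nodes) + 1):
        for S in combinations(nodes, r):
            if not terminals <= set(S):
                continue
            if sum(cost[v] for v in S) > budget:
                continue
            if not connected(S):
                continue
            p = sum(profit[v] for v in S)
            if p > best_profit:
                best, best_profit = set(S), p
    return best, best_profit

# Toy 4-node cycle: connect terminals a and c within a budget of 3.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
cost = {v: 1 for v in nodes}
profit = {"a": 1, "b": 5, "c": 1, "d": 2}
best, best_profit = best_connected_subgraph(
    nodes, edges, cost, profit, budget=3, terminals={"a", "c"})
```

The connectivity requirement is what makes the problem hard: dropping it would reduce the instance to a simple knapsack over node profits.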
The effects of strain heating in lithospheric stretching models
NASA Technical Reports Server (NTRS)
Stanton, M.; Hodge, D.; Cozzarelli, F.
1985-01-01
The deformation by stretching of a continental-type lithosphere has been formulated so that the problem can be solved by a continuum mechanical approach. The deformation, stress state, and temperature distribution are constrained to satisfy the physical laws of conservation of mass, energy, and momentum, and an experimentally defined rheological response. The conservation of energy equation, including a strain energy dissipation term, is given. The continental lithosphere is assumed to have the rheology of an isotropic, incompressible, nonlinear viscous, two-layered solid.
BFV-BRST analysis of equivalence between noncommutative and ordinary gauge theories
NASA Astrophysics Data System (ADS)
Dayi, O. F.
2000-05-01
Constrained Hamiltonian structure of noncommutative gauge theory for the gauge group U(1) is discussed. Constraints are shown to be first class, although they do not give an Abelian algebra in terms of Poisson brackets. The related BFV-BRST charge gives a vanishing generalized Poisson bracket by itself due to the associativity of the *-product. Equivalence of noncommutative and ordinary gauge theories is formulated in generalized phase space by using the BFV-BRST charge, and a solution is obtained. Gauge fixing is discussed.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Kolb, M. A.
1987-01-01
A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
Wagemaker, Tais A L; Maia Campos, Patrícia M B G; Shimizu, Kenji; Kyotani, Daiki; Yoshida, Daisuke
2017-08-01
Exposure to cutaneous irritants induces an excess of ROS in the skin and can trigger an inflammatory response. Topical antioxidant-based formulations can help to counteract ROS generation. This study evaluated the influence of antioxidant-based topical formulations on the inflammatory response of skin, using a combination of in vivo real-time non-invasive techniques. Nine test areas were defined on each volar forearm of the 25 Japanese volunteers. Measurements were performed before and after treatment with 15 μL of a 5% sodium dodecyl sulfate solution and 15 μL of the base formulation (vehicle) or the vehicle with 1% of the antioxidants. Volunteers without antioxidant treatment showed more pronounced erythematous areas. Transepidermal water loss of areas treated with the green tea polyphenol (GTP)-based formulation showed fully recovered skin. Skin barrier damage caused by repeated applications of SDS showed characteristic alterations detectable by in vivo confocal microscopy, such as desquamation, spongiosis and inflammatory infiltrates. The majority of confocal microscopy inflammation signs were found in skin without treatment, followed by the vehicle. Ascorbyl tetraisopalmitate, Coenzyme Q10, GTP- and Resveratrol-based formulations reduced pro-inflammatory cytokine release and attenuated inflammatory signs. The combination of techniques provides results that highlight the importance of antioxidant-based formulations for rapid skin recovery. Copyright © 2017 Elsevier B.V. All rights reserved.
On pseudo-spectral time discretizations in summation-by-parts form
NASA Astrophysics Data System (ADS)
Ruggiu, Andrea A.; Nordström, Jan
2018-05-01
Fully-implicit discrete formulations in summation-by-parts form for initial-boundary value problems must be invertible in order to provide well functioning procedures. We prove that, under mild assumptions, pseudo-spectral collocation methods for the time derivative lead to invertible discrete systems when energy-stable spatial discretizations are used.
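The defining summation-by-parts property is that the first-derivative operator D = P⁻¹Q satisfies Q + Qᵀ = B = diag(−1, 0, …, 0, 1), so the discrete energy method mimics integration by parts exactly. A minimal sketch with the classical second-order SBP pair on a uniform grid (the paper concerns pseudo-spectral operators in time, but the algebraic property is the same):

```python
def sbp_second_order(n, h):
    """Classical second-order SBP first-derivative pair (P, Q) with a
    diagonal norm; D = P^{-1} Q satisfies Q + Q^T = diag(-1, 0, ..., 0, 1)."""
    P = [h * (0.5 if i in (0, n - 1) else 1.0) for i in range(n)]
    Q = [[0.0] * n for _ in range(n)]
    Q[0][0], Q[0][1] = -0.5, 0.5
    Q[n - 1][n - 2], Q[n - 1][n - 1] = -0.5, 0.5
    for i in range(1, n - 1):
        Q[i][i - 1], Q[i][i + 1] = -0.5, 0.5
    return P, Q

n, h = 6, 0.2
P, Q = sbp_second_order(n, h)

# SBP property: Q + Q^T is zero except for -1 and +1 in the corners.
B = [[Q[i][j] + Q[j][i] for j in range(n)] for i in range(n)]

# D differentiates linear data exactly, including the boundary rows.
xs = [i * h for i in range(n)]
du = [sum(Q[i][j] * xs[j] for j in range(n)) / P[i] for i in range(n)]
```

Because uᵀP(Du) + (Du)ᵀPu = uᵀBu = u_N² − u_0², an energy estimate for the discretization follows from the continuous one, which is the structural ingredient the invertibility proof above builds on.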
Choice feeding of protein concentrate and grain to organic meat chickens
USDA-ARS?s Scientific Manuscript database
In alternative poultry production, such as free-range and organic, alternative feeding methods may be useful. Instead of a fully formulated diet, a “choice” method offers two feeds, a protein concentrate and a grain, between which birds self-select. This method was common in the past and may allo...
A Low Mach Number Model for Moist Atmospheric Flows
Duarte, Max; Almgren, Ann S.; Bell, John B.
2015-04-01
A low Mach number model for moist atmospheric flows is introduced that accurately incorporates reversible moist processes in flows whose features of interest occur on advective rather than acoustic time scales. Total water is used as a prognostic variable, so that water vapor and liquid water are diagnostically recovered as needed from an exact Clausius–Clapeyron formula for moist thermodynamics. Low Mach number models can be computationally more efficient than a fully compressible model, but the low Mach number formulation introduces additional mathematical and computational complexity because of the divergence constraint imposed on the velocity field. In this paper, latent heat release is accounted for in the source term of the constraint by estimating the rate of phase change based on the time variation of saturated water vapor subject to the thermodynamic equilibrium constraint. Finally, the authors numerically assess the validity of the low Mach number approximation for moist atmospheric flows by contrasting the low Mach number solution to reference solutions computed with a fully compressible formulation for a variety of test problems.
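The total-water diagnosis step can be sketched directly: integrate Clausius–Clapeyron (here with constant latent heat, a common simplification that may differ from the paper's exact formula) to get the saturation mixing ratio, then split prognostic total water into vapor and liquid. Constants are standard illustrative values:

```python
import math

# Illustrative constants (SI units), not necessarily the paper's values:
L, RV = 2.5e6, 461.5          # latent heat [J/kg], water-vapor gas constant
ES0, T0, EPS = 611.2, 273.15, 0.622

def q_sat(T, p):
    """Saturation mixing ratio from the Clausius-Clapeyron relation
    integrated assuming constant latent heat."""
    es = ES0 * math.exp((L / RV) * (1.0 / T0 - 1.0 / T))
    return EPS * es / (p - es)

def partition_total_water(qt, T, p):
    """Diagnose (vapor, liquid) from the prognostic total water qt:
    vapor saturates first, any excess is liquid."""
    qv = min(qt, q_sat(T, p))
    return qv, qt - qv

# A supersaturated parcel: 20 g/kg total water at 290 K and 1000 hPa.
qv, ql = partition_total_water(0.02, 290.0, 1.0e5)
```

Carrying qt prognostically keeps advection conservative and condensation purely diagnostic, which is what lets the latent-heat release be folded into the divergence-constraint source term.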
NASA Astrophysics Data System (ADS)
Lacaze, Guilhem; Oefelein, Joseph
2016-11-01
High-pressure flows are known to be challenging to simulate due to thermodynamic non-linearities occurring in the vicinity of the pseudo-boiling line. This study investigates the origin of this issue by analyzing the behavior of thermodynamic processes at elevated pressure and low temperature. We show that under transcritical conditions, non-linearities significantly amplify numerical errors associated with the construction of fluxes. These errors affect the local density and energy balances, which in turn create pressure oscillations. For that reason, solvers based on a conservative system of equations that transports density and total energy are subject to unphysical pressure variations in gradient regions. These perturbations hinder numerical stability and degrade the accuracy of predictions. To circumvent this problem, the governing system can be reformulated with a pressure-based treatment of energy. We present comparisons between the pressure-based and fully conservative formulations using a progressive set of canonical cases, including a cryogenic turbulent mixing layer at rocket engine conditions. Department of Energy, Office of Science, Basic Energy Sciences Program.
A fully implicit finite element method for bidomain models of cardiac electromechanics
Dal, Hüsnü; Göktepe, Serdar; Kaliske, Michael; Kuhl, Ellen
2012-01-01
We propose a novel, monolithic, and unconditionally stable finite element algorithm for the bidomain-based approach to cardiac electromechanics. We introduce the transmembrane potential, the extracellular potential, and the displacement field as independent variables, and extend the common two-field bidomain formulation of electrophysiology to a three-field formulation of electromechanics. The intrinsic coupling arises from both excitation-induced contraction of cardiac cells and the deformation-induced generation of intra-cellular currents. The coupled reaction-diffusion equations of the electrical problem and the momentum balance of the mechanical problem are recast into their weak forms through a conventional isoparametric Galerkin approach. As a novel aspect, we propose a monolithic approach to solve the governing equations of excitation-contraction coupling in a fully coupled, implicit sense. We demonstrate the consistent linearization of the resulting set of non-linear residual equations. To assess the algorithmic performance, we illustrate characteristic features by means of representative three-dimensional initial-boundary value problems. The proposed algorithm may open new avenues to patient specific therapy design by circumventing stability and convergence issues inherent to conventional staggered solution schemes. PMID:23175588
Fully coupled methods for multiphase morphodynamics
NASA Astrophysics Data System (ADS)
Michoski, C.; Dawson, C.; Mirabito, C.; Kubatko, E. J.; Wirasaet, D.; Westerink, J. J.
2013-09-01
We present numerical methods for a system of equations consisting of the two dimensional Saint-Venant shallow water equations (SWEs) fully coupled to a completely generalized Exner formulation of hydrodynamically driven sediment discharge. This formulation is implemented by way of a discontinuous Galerkin (DG) finite element method, using a Roe Flux for the advective components and the unified form for the dissipative components. We implement a number of Runge-Kutta time integrators, including a family of strong stability preserving (SSP) schemes, and Runge-Kutta Chebyshev (RKC) methods. A brief discussion is provided regarding implementational details for generalizable computer algebra tokenization using arbitrary algebraic fluxes. We then run numerical experiments to show standard convergence rates, and discuss important mathematical and numerical nuances that arise due to prominent features in the coupled system, such as the emergence of nondifferentiable and sharp zero crossing functions, radii of convergence in manufactured solutions, and nonconservative product (NCP) formalisms. Finally we present a challenging application model concerning hydrothermal venting across metalliferous muds in the presence of chemical reactions occurring in low pH environments.
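Of the time integrators listed, the three-stage SSP scheme of Shu and Osher is the standard workhorse and is short enough to show in full: each stage is a forward-Euler step, and the update is a convex combination of stages, which is what preserves the strong stability of the underlying Euler method. The test problem below is an invented scalar ODE, not the coupled SWE-Exner system:

```python
import math

def ssp_rk3(f, y, t, dt):
    """Shu-Osher three-stage, third-order strong-stability-preserving
    Runge-Kutta step: convex combinations of forward-Euler stages."""
    y1 = y + dt * f(t, y)                                   # Euler stage
    y2 = 0.75 * y + 0.25 * (y1 + dt * f(t + dt, y1))        # at t + dt/2
    return y / 3.0 + 2.0 / 3.0 * (y2 + dt * f(t + 0.5 * dt, y2))

# Integrate y' = -y from y(0) = 1 to t = 1 and compare with exp(-1).
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = ssp_rk3(lambda t, y: -y, y, t, dt)
    t += dt
# y is close to math.exp(-1), with third-order global accuracy.
```

Because every stage is a convex combination of Euler steps, any monotonicity or TVD property the spatial DG discretization guarantees under forward Euler carries over under the usual CFL restriction.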
Superpixel Cut for Figure-Ground Image Segmentation
NASA Astrophysics Data System (ADS)
Yang, Michael Ying; Rosenhahn, Bodo
2016-06-01
Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.
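The Min-Cut core of such graph-based segmenters rests on max-flow/min-cut duality: the minimum cost of separating the figure terminal from the ground terminal equals the maximum flow between them. A compact Edmonds-Karp sketch on an invented four-node graph (real superpixel graphs are far larger, and the paper additionally optimizes its cost with parametric programming):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; by duality its value equals the minimum
    s-t cut cost, the quantity a graph-cut segmenter minimizes."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:            # BFS for a shortest path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break                               # no augmenting path left
        bottleneck, v = float("inf"), t         # residual along the path
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                           # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

# Tiny graph: node 0 = figure terminal, node 3 = ground terminal.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
value = max_flow(cap, 0, 3)
```

After termination, the nodes still BFS-reachable from the source in the residual graph form the figure side of the minimum cut; working over superpixels instead of pixels keeps such graphs small enough for interactive use.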
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
2007-03-01
potential of moving closer to the goal of a fully service-oriented GIG by allowing even computing - and bandwidth-constrained elements to participate...the functionality provided by core network assets with relatively unlimited bandwidth and computing resources. Finally, the nature of information is...the Department of Defense is a requirement for ubiquitous computer connectivity. An espoused vehicle for delivering that ubiquity is the Global
Stability and Convergence of Underintegrated Finite Element Approximations
NASA Technical Reports Server (NTRS)
Oden, J. T.
1984-01-01
The effects of underintegration on the numerical stability and convergence characteristics of certain classes of finite element approximations were analyzed. Particular attention is given to hourglassing instabilities that arise from underintegrating the stiffness matrix entries and checkerboard instabilities that arise from underintegrating constraint terms such as those arising from incompressibility conditions. A fundamental result reported here is the proof that the fully integrated stiffness is restored in some cases through a post-processing operation.
Iwasaki, Toshiki; Nelson, Jonathan M.; Shimizu, Yasuyuki; Parker, Gary
2017-01-01
Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.
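The entrainment-based mechanism for streamwise dispersion can be sketched with a 1-D random-walk model: particles are entrained at random and draw step lengths from a thin-tailed (exponential) PDF, so the plume variance grows in time even without bars. All parameter values are illustrative, not taken from the study:

```python
import random

def simulate_tracers(n_particles=2000, n_events=200, mean_step=1.0,
                     p_move=0.3, seed=0):
    """1-D entrainment-based tracer transport: at each event a particle is
    entrained with probability p_move and takes a step drawn from an
    exponential (thin-tailed) step-length PDF; otherwise it rests."""
    rng = random.Random(seed)
    x = [0.0] * n_particles
    for _ in range(n_events):
        for i in range(n_particles):
            if rng.random() < p_move:
                x[i] += rng.expovariate(1.0 / mean_step)
    return x

x = simulate_tracers()
mean = sum(x) / len(x)
var = sum((xi - mean) ** 2 for xi in x) / len(x)
# mean displacement ~ n_events * p_move * mean_step = 60, and the variance
# grows linearly with the number of events (normal, Fickian dispersion)
```

Heavy-tailed waiting times or truncated travel distances of the kind induced by migrating bars would change this scaling, which is the effect the study isolates.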
NASA Astrophysics Data System (ADS)
Iwasaki, Toshiki; Nelson, Jonathan; Shimizu, Yasuyuki; Parker, Gary
2017-04-01
Asymptotic characteristics of the transport of bed load tracer particles in rivers have been described by advection-dispersion equations. Here we perform numerical simulations designed to study the role of free bars, and more specifically single-row alternate bars, on streamwise tracer particle dispersion. In treating the conservation of tracer particle mass, we use two alternative formulations for the Exner equation of sediment mass conservation: the flux-based formulation, in which bed elevation varies with the divergence of the bed load transport rate, and the entrainment-based formulation, in which bed elevation changes with the net deposition rate. Under the condition of no net bed aggradation/degradation, a 1-D flux-based deterministic model that does not describe free bars yields no streamwise dispersion. The entrainment-based 1-D formulation, on the other hand, models stochasticity via the probability density function (PDF) of particle step length, and as a result does show tracer dispersion. When the formulation is generalized to 2-D to include free alternate bars, however, both models yield almost identical asymptotic advection-dispersion characteristics, in which streamwise dispersion is dominated by randomness inherent in free bar morphodynamics. This randomness can result in a heavy-tailed PDF of waiting time. In addition, migrating bars may constrain the travel distance through temporary burial, causing a thin-tailed PDF of travel distance. The superdiffusive character of streamwise particle dispersion predicted by the model is attributable to the interaction of these two effects.
Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites
NASA Technical Reports Server (NTRS)
Turner, Travis L.
2001-01-01
This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.
Resonant Raman spectra of diindenoperylene thin films
NASA Astrophysics Data System (ADS)
Scholz, R.; Gisslén, L.; Schuster, B.-E.; Casu, M. B.; Chassé, T.; Heinemeyer, U.; Schreiber, F.
2011-01-01
Resonant and preresonant Raman spectra obtained on diindenoperylene (DIP) thin films are interpreted with calculations of the deformation of a relaxed excited molecule with density functional theory (DFT). The comparison of excited state geometries based on time-dependent DFT or on a constrained DFT scheme with observed absorption spectra of dissolved DIP reveals that the deformation pattern deduced from constrained DFT is more reliable. Most observed Raman peaks can be assigned to calculated A_g-symmetric breathing modes of DIP or their combinations. As the position of one of the laser lines used falls into a highly structured absorption band, we have carefully analyzed the Raman excitation profile arising from the frequency dependence of the dielectric tensor. This procedure gives Raman cross sections in good agreement with the observed relative intensities, both in the fully resonant and in the preresonant case.
Geometric constrained variational calculus. III: The second variation (Part II)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico
2016-03-01
The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.
Resonant Raman spectra of diindenoperylene thin films.
Scholz, R; Gisslén, L; Schuster, B-E; Casu, M B; Chassé, T; Heinemeyer, U; Schreiber, F
2011-01-07
Resonant and preresonant Raman spectra obtained on diindenoperylene (DIP) thin films are interpreted with calculations of the deformation of a relaxed excited molecule with density functional theory (DFT). The comparison of excited state geometries based on time-dependent DFT or on a constrained DFT scheme with observed absorption spectra of dissolved DIP reveals that the deformation pattern deduced from constrained DFT is more reliable. Most observed Raman peaks can be assigned to calculated A(g)-symmetric breathing modes of DIP or their combinations. As the position of one of the laser lines used falls into a highly structured absorption band, we have carefully analyzed the Raman excitation profile arising from the frequency dependence of the dielectric tensor. This procedure gives Raman cross sections in good agreement with the observed relative intensities, both in the fully resonant and in the preresonant case.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role, since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray-YMP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code, such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
Unified solver for fluid dynamics and aeroacoustics in isentropic gas flows
NASA Astrophysics Data System (ADS)
Pont, Arnau; Codina, Ramon; Baiges, Joan; Guasch, Oriol
2018-06-01
The high computational cost of solving the fully compressible Navier-Stokes equations numerically, together with the poor performance of most numerical formulations for compressible flow in the low Mach number regime, has led to the necessity for more affordable numerical models for Computational Aeroacoustics. For low Mach number subsonic flows with neither shocks nor thermal coupling, both flow dynamics and wave propagation can be considered isentropic. Therefore, a joint isentropic formulation for flow and aeroacoustics can be devised which avoids the need for segregating flow and acoustic scales. Under these assumptions, density and pressure fluctuations are directly proportional, and a two-field velocity-pressure compressible formulation can be derived as an extension of an incompressible solver. Moreover, the linear system of equations which arises from the proposed isentropic formulation is better conditioned than the homologous incompressible one due to the presence of a pressure time derivative. Similarly to other compressible formulations, the prescription of boundary conditions will have to deal with the backscattering of acoustic waves. In this sense, a separated imposition of boundary conditions for flow and acoustic scales, which allows the evacuation of waves through Dirichlet boundaries without using any tailored damping model, will be presented.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly large structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly large problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions) and as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
Abd, Eman; Namjoshi, Sarika; Mohammed, Yousuf H; Roberts, Michael S; Grice, Jeffrey E
2016-01-01
We examined the extent of skin permeation enhancement of the hydrophilic drug caffeine and lipophilic drug naproxen applied in nanoemulsions incorporating skin penetration enhancers. Infinite doses of fully characterized oil-in-water nanoemulsions containing the skin penetration enhancers oleic acid or eucalyptol as oil phases and caffeine (3%) or naproxen (2%) were applied to human epidermal membranes in Franz diffusion cells, along with aqueous control solutions. Caffeine and naproxen fluxes were determined over 8 h. Solute solubility in the formulations and in the stratum corneum (SC), as well as the uptake of product components into the SC were measured. The nanoemulsions significantly enhanced the skin penetration of caffeine and naproxen, compared to aqueous control solutions. Caffeine maximum flux enhancement was associated with a synergistic increase in both caffeine SC solubility and skin diffusivity, whereas a formulation-increased solubility in the SC was the dominant determinant for increased naproxen fluxes. Enhancements in SC solubility were related to the uptake of the formulation excipients containing the active compounds into the SC. Enhanced skin penetration in these systems is largely driven by uptake of formulation excipients containing the active compounds into the SC with impacts on SC solubility and diffusivity.
NASA Astrophysics Data System (ADS)
Li, Duo; Liu, Yajing
2017-04-01
Along-strike segmentation of slow-slip events (SSEs) and nonvolcanic tremors in Cascadia may reflect heterogeneities of the subducting slab or overlying continental lithosphere. However, the nature behind this segmentation is not fully understood. We develop a 3-D model for episodic SSEs in northern and central Cascadia, incorporating both seismological and gravitational observations to constrain the heterogeneities in the megathrust fault properties. Six years of automatically detected tremors are used to constrain the rate-state friction parameters. The effective normal stress at SSE depths is constrained by along-margin free-air and Bouguer gravity anomalies. The along-strike variation in the long-term plate convergence rate is also taken into consideration. Simulation results show five segments of ~Mw 6.0 SSEs spontaneously appearing along strike, correlated with the distribution of tremor epicenters. Modeled SSE recurrence intervals are comparably consistent with GPS observations using both types of gravity anomaly constraints. However, the model constrained by the free-air anomaly does a better job of reproducing the cumulative slip, as well as surface displacements more consistent with GPS observations. The modeled along-strike segmentation represents the averaged slip release over many SSE cycles, rather than permanent barriers. Individual slow-slip events can still propagate across the boundaries, which may cause interactions between adjacent SSEs, as observed in time-dependent GPS inversions. In addition, the moment-duration scaling is sensitive to the selection of velocity criteria for determining when SSEs occur. Hence, the detection ability of the current GPS network should be considered in the interpretation of slow earthquake source parameter scaling relations.
Nonrecursive formulations of multibody dynamics and concurrent multiprocessing
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Menon, Ramesh
1993-01-01
Since the late 1980's, research in recursive formulations of multibody dynamics has flourished. Historically, much of this research can be traced to applications of low dimensionality in mechanism and vehicle dynamics. Indeed, there is little doubt that recursive order N methods are the method of choice for this class of systems. This approach has the advantage that a minimal number of coordinates are utilized, parallelism can be induced for certain system topologies, and the method is of order N computational cost for systems of N rigid bodies. Despite the fact that many authors have dismissed redundant coordinate formulations as being of order N³, and hence less attractive than recursive formulations, we present recent research that demonstrates that at least three distinct classes of redundant, nonrecursive multibody formulations consistently achieve order N computational cost for systems of rigid and/or flexible bodies. These formulations are as follows: (1) the preconditioned range space formulation; (2) penalty methods; and (3) augmented Lagrangian methods for nonlinear multibody dynamics. The first method can be traced to its foundation in equality constrained quadratic optimization, while the last two methods have been studied extensively in the context of coercive variational boundary value problems in computational mechanics. Until recently, however, they have not been investigated in the context of multibody simulation, and present theoretical questions unique to nonlinear dynamics. All of these nonrecursive methods have additional advantages with respect to recursive order N methods: (1) the formalisms retain the highly desirable order N computational cost; (2) the techniques are amenable to concurrent simulation strategies; (3) the approaches do not depend upon system topology to induce concurrency; and (4) the methods can be derived to balance the computational load automatically on concurrent multiprocessors.
In addition to the presentation of the fundamental formulations, this paper presents new theoretical results regarding the rate of convergence of order N constraint stabilization schemes associated with the newly introduced class of methods.
Autonomous learning based on cost assumptions: theoretical studies and experiments in robot control.
Ribeiro, C H; Hemerly, E M
2000-02-01
Autonomous learning techniques are based on experience acquisition. In most realistic applications, experience is time-consuming: it implies sensor reading, actuator control and algorithmic update, constrained by the learning system dynamics. The information crudeness upon which classical learning algorithms operate makes such problems too difficult and unrealistic. Nonetheless, additional information for facilitating the learning process ideally should be embedded in such a way that the structural, well-studied characteristics of these fundamental algorithms are maintained. We investigate in this article a more general formulation of the Q-learning method that allows for a spreading of information derived from single updates towards a neighbourhood of the instantly visited state and converges to optimality. We show how this new formulation can be used as a mechanism to safely embed prior knowledge about the structure of the state space, and demonstrate it in a modified implementation of a reinforcement learning algorithm in a real robot navigation task.
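The idea of spreading a single update towards a neighbourhood of the visited state can be illustrated with a tabular sketch on a hypothetical 1-D corridor task, using a Gaussian spreading kernel. This is a sketch of the mechanism only, not the authors' exact algorithm or its convergence conditions:

```python
import math
import random

def q_spread_learning(n_states=10, goal=9, episodes=400, alpha=0.2,
                      gamma=0.9, sigma=1.0, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor (reward 1 on reaching the goal)
    where each temporal-difference update is spread to neighbouring states
    with a Gaussian kernel centred on the visited state."""
    rng = random.Random(seed)
    moves = (-1, +1)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:                 # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = min(max(s + moves[a], 0), n_states - 1)
            r = 1.0 if s2 == goal else 0.0
            td = r + gamma * max(Q[s2]) - Q[s][a]
            for u in range(n_states):              # spread the single update
                w = math.exp(-((u - s) ** 2) / (2.0 * sigma ** 2))
                Q[u][a] += alpha * w * td
            s = s2
    return Q

Q = q_spread_learning()
```

With sigma -> 0 the inner loop reduces to the classical single-state Q-learning update; a wider kernel encodes the prior knowledge that neighbouring corridor states have similar values.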
Snee, Lawrence W.
2002-01-01
40Ar/39Ar geochronology is an experimentally robust and versatile method for constraining time and temperature in geologic processes. The argon method is the most broadly applied in mineral-deposit studies. Standard analytical methods and formulations exist, making the fundamentals of the method well defined. A variety of graphical representations exist for evaluating argon data. A broad range of minerals found in mineral deposits, alteration zones, and host rocks commonly is analyzed to provide age, temporal duration, and thermal conditions for mineralization events and processes. All are discussed in this report. The usefulness of and evolution of the applicability of the method are demonstrated in studies of the Panasqueira, Portugal, tin-tungsten deposit; the Cornubian batholith and associated mineral deposits, southwest England; the Red Mountain intrusive system and associated Urad-Henderson molybdenum deposits; and the Eastern Goldfields Province, Western Australia.
Ghost artifact cancellation using phased array processing.
Kellman, P; McVeigh, E R
2001-08-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
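The constrained combining step can be sketched directly: choose coil weights that minimize noise power subject to unit gain on the true-pixel sensitivity and an exact null on the ghost-pixel sensitivity. The coil sensitivities below are random stand-ins for measured values:

```python
import numpy as np

rng = np.random.default_rng(0)
nc = 4  # number of coils

# Hypothetical coil sensitivity vectors at the true-pixel location and at
# the ghost-pixel location (random stand-ins for measured sensitivities).
s_true = rng.standard_normal(nc) + 1j * rng.standard_normal(nc)
s_ghost = rng.standard_normal(nc) + 1j * rng.standard_normal(nc)
S = np.column_stack([s_true, s_ghost])
R = np.eye(nc)  # noise covariance; identity for illustration

# Minimize w^H R w subject to S^H w = [1, 0]^T: unit gain on the true
# signal, exact null on the ghost (a SENSE-like constrained optimum).
Rinv = np.linalg.inv(R)
w = Rinv @ S @ np.linalg.inv(S.conj().T @ Rinv @ S) @ np.array([1.0, 0.0])

gain_true = w.conj() @ s_true    # unit gain on the true signal
gain_ghost = w.conj() @ s_ghost  # nulled ghost
```

The closed form w = R⁻¹S(SᴴR⁻¹S)⁻¹e₁ is the standard solution of this linearly constrained minimum-variance problem; with a measured noise covariance R it also maximizes SNR among all ghost-nulling combiners.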
Ghost Artifact Cancellation Using Phased Array Processing
Kellman, Peter; McVeigh, Elliot R.
2007-01-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples. PMID:11477638
Extension of non-linear beam models with deformable cross sections
NASA Astrophysics Data System (ADS)
Sokolov, I.; Krylov, S.; Harari, I.
2015-12-01
Geometrically exact beam theory is extended to allow distortion of the cross section. We present an appropriate set of cross-section basis functions and provide physical insight to the cross-sectional distortion from linear elastostatics. The beam formulation in terms of material (back-rotated) beam internal force resultants and work-conjugate kinematic quantities emerges naturally from the material description of virtual work of constrained finite elasticity. The inclusion of cross-sectional deformation allows straightforward application of three-dimensional constitutive laws in the beam formulation. Beam counterparts of applied loads are expressed in terms of the original three-dimensional data. Special attention is paid to the treatment of the applied stress, keeping in mind applications such as hydrogel actuators under environmental stimuli or devices made of electroactive polymers. Numerical comparisons show the ability of the beam model to reproduce finite elasticity results with good efficiency.
Equilibrium Conformations of Concentric-tube Continuum Robots
Rucker, D. Caleb; Webster, Robert J.; Chirikjian, Gregory S.; Cowan, Noah J.
2013-01-01
Robots consisting of several concentric, preshaped, elastic tubes can work dexterously in narrow, constrained, and/or winding spaces, as are commonly found in minimally invasive surgery. Previous models of these “active cannulas” assume piecewise constant precurvature of component tubes and neglect torsion in curved sections of the device. In this paper we develop a new coordinate-free energy formulation that accounts for general preshaping of an arbitrary number of component tubes, and which explicitly includes both bending and torsion throughout the device. We show that previously reported models are special cases of our formulation, and then explore in detail the implications of torsional flexibility for the special case of two tubes. Experiments demonstrate that this framework is more descriptive of physical prototype behavior than previous models; it reduces model prediction error by 82% over the calibrated bending-only model, and 17% over the calibrated transmissional torsion model in a set of experiments. PMID:25125773
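The simplest special case recovered by such models, torsionless tubes with planar, constant precurvature, combines the tubes' precurvatures weighted by bending stiffness; a minimal sketch with illustrative values:

```python
def combined_curvature(stiffness, precurvature):
    """Resultant curvature of concentric precurved tubes in the torsionless,
    planar, constant-curvature special case: the precurvatures combine
    weighted by bending stiffness, k = sum(EI_i * k_i) / sum(EI_i)."""
    return (sum(EI * k for EI, k in zip(stiffness, precurvature))
            / sum(stiffness))

# Illustrative pair: a stiff outer tube (EI = 2.0, curvature 10 m^-1)
# dominates a compliant inner tube (EI = 1.0, curvature 1 m^-1).
k = combined_curvature([2.0, 1.0], [10.0, 1.0])
```

The paper's energy formulation generalizes this by letting curvature and torsion vary along the arc length and by resolving the torsional twist between tubes that this one-liner ignores.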
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem
2005-01-01
In this paper, we present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such a measurement. This quantity is obtained using a probabilistic model of RLVs that has been partially developed in the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor at a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem where the reliability and operational safety of the system as a whole are considered. Although this research is conducted specifically for RLVs, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
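The information-theoretic "worth" of a candidate test point can be illustrated by the mutual information between the discretized sensor reading and the health state. The two joint distributions below are hypothetical, chosen so that sensor A is informative and sensor B is not; the paper's probabilistic RLV model is far richer:

```python
import math

def mutual_information(joint):
    """Mutual information I(H; X) in bits from a joint probability table
    joint[h][x] over health states h and discretized sensor readings x."""
    ph = [sum(row) for row in joint]
    px = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for h, row in enumerate(joint):
        for x, p in enumerate(row):
            if p > 0.0:
                mi += p * math.log2(p / (ph[h] * px[x]))
    return mi

# Hypothetical joint distributions for two candidate test points: sensor A's
# reading is strongly correlated with the health state, sensor B's is not.
joint_A = [[0.45, 0.05],
           [0.05, 0.45]]
joint_B = [[0.25, 0.25],
           [0.25, 0.25]]

mi_A = mutual_information(joint_A)  # informative sensor, ~0.53 bits
mi_B = mutual_information(joint_B)  # uninformative sensor, 0 bits
```

Ranking candidate test points by this quantity, subject to reliability and safety constraints, is the kind of constrained optimization the abstract describes.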
NASA Astrophysics Data System (ADS)
Kleidon, Axel; Renner, Maik
2016-04-01
The soil-plant-atmosphere system is a complex system that is strongly shaped by interactions between the physical environment and vegetation. This complexity appears to demand equally as complex models to fully capture the dynamics of the coupled system. What we describe here is an alternative approach that is based on thermodynamics and which allows for comparatively simple formulations free of empirical parameters by assuming that the system is so complex that its emergent dynamics are only constrained by the thermodynamics of the system. This approach specifically makes use of the second law of thermodynamics, a fundamental physical law that is typically not considered in Earth system science. Its relevance to land surface processes is that it fundamentally sets a direction as well as limits to energy conversions and associated rates of mass exchange, but it requires us to formulate land surface processes as thermodynamic processes that are driven by energy conversions. We describe an application of this approach to the surface energy balance partitioning at the diurnal scale. In this application the turbulent heat fluxes of sensible and latent heat are described as the result of a convective heat engine that is driven by solar radiative heating of the surface and that operates at its thermodynamic limit. The predicted fluxes from this approach compare very well to observations at several sites. This suggests that the turbulent exchange fluxes between the surface and the atmosphere operate at their thermodynamic limit, so that thermodynamics imposes a relevant constraint on the land surface-atmosphere system. Yet, thermodynamic limits do not entirely determine the soil-plant-atmosphere system because vegetation affects these limits, for instance by affecting the magnitude of surface heating by absorption of solar radiation in the canopy layer.
These effects are likely to make the conditions at the land surface more favorable for photosynthetic activity, which then links this thermodynamic approach to optimality in vegetation. We also contrast this approach to common, semi-empirical approaches of surface-atmosphere exchange and discuss how thermodynamics may set a broader range of transport limitations and optimality in the soil-plant-atmosphere system.
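The convective heat engine operating at its thermodynamic limit can be sketched with a toy maximum-power calculation. Assume, as a simplification of the approach described above, a linearized radiative conductance kr, so a turbulent flux J leaves a surface-air temperature difference (Rs - J)/kr and the Carnot-like power is J(Ts - Ta)/T0; the resulting quadratic peaks at J = Rs/2. All numbers are illustrative, not the authors' parameter values:

```python
# Toy surface energy balance: absorbed solar heating Rs is split between a
# turbulent heat flux J and a radiative exchange with linearized conductance
# kr, so the surface-air temperature difference is (Rs - J) / kr.
Rs, kr, T0 = 160.0, 4.0, 288.0   # W m^-2, W m^-2 K^-1, K (illustrative)

def power(J):
    """Carnot-like power of the convective heat engine driven by flux J."""
    return J * (Rs - J) / (kr * T0)

# Brute-force search over J; the quadratic attains its maximum at J = Rs/2,
# i.e. half of the radiative forcing drives turbulent exchange at max power.
J_opt = max((j * 0.1 for j in range(int(Rs * 10) + 1)), key=power)
```

The appeal of such maximum-power formulations is visible even in this sketch: the optimal flux partitioning follows from the forcing and the radiative conductance alone, with no empirical exchange coefficients.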
Challenging the bioethical application of the autonomy principle within multicultural societies.
Fagan, Andrew
2004-01-01
This article critically re-examines the application of the principle of patient autonomy within bioethics. In complex societies such as those found in North America and Europe health care professionals are increasingly confronted by patients from diverse ethnic, cultural, and religious backgrounds. This affects the relationship between clinicians and patients to the extent that patients' deliberations upon the proposed courses of treatment can, in various ways and to varying extents, be influenced by their ethnic, cultural, and religious commitments. The principle of patient autonomy is the main normative constraint imposed upon medical treatment. Bioethicists typically appeal to the principle of patient autonomy as a means for generally attempting to resolve conflict between patients and clinicians. In recent years a number of bioethicists have responded to the condition of multiculturalism by arguing that the autonomy principle provides the basis for a common moral discourse capable of regulating the relationship between clinicians and patients in those situations where patients' beliefs and commitments do or may contradict the ethos of biomedicine. This article challenges that claim. I argue that the precise manner in which the autonomy principle is philosophically formulated within such accounts prohibits bioethicists' deployment of autonomy as a core ideal for a common moral discourse within multicultural societies. The formulation of autonomy underlying such accounts cannot be extended to simply assimilate individuals' most fundamental religious and cultural commitments and affiliations per se. I challenge the assumption that respecting prospective patients' fundamental religious and cultural commitments is necessarily always compatible with respecting their autonomy. I argue that the character of some peoples' relationship with their cultural or religious community acts to significantly constrain the possibilities for acting autonomously. 
The implication is clear. The autonomy principle may currently be invalidly applied in certain circumstances because the conditions for the exercise of autonomy have not been fully, or even adequately, satisfied. This is a controversial claim. The precise terms of my argument, while addressing the specific application of the autonomy principle within bioethics, will resonate beyond this sphere and raise questions for attempts to establish a common moral discourse upon the ideal of personal autonomy within multicultural societies generally.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processor capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing commercial and/or public-domain software are also included, whenever possible.
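The interface (Schur-complement) structure at the heart of such FE domain decomposition solvers can be sketched compactly. Below is a minimal two-subdomain split of a 1-D Poisson finite element system in NumPy; the mesh, the partitioning, and the dense solves are illustrative assumptions, not details taken from the report.

```python
import numpy as np

# Hedged sketch: two-subdomain Schur-complement solve for a 1-D Poisson FE
# system, illustrating the DD solver structure the abstract describes.

def poisson_fe_matrix(n):
    """Stiffness matrix for -u'' = f on (0,1), u(0)=u(1)=0, n interior nodes."""
    h = 1.0 / (n + 1)
    K = (2.0 / h) * np.eye(n) - (1.0 / h) * (np.eye(n, k=1) + np.eye(n, k=-1))
    return K, h

n = 21                      # odd, so one node sits on the interface
K, h = poisson_fe_matrix(n)
f = np.full(n, 1.0) * h     # consistent load for f(x) = 1

# Partition: subdomain 1 | interface node | subdomain 2
m = n // 2
i1, ig, i2 = np.arange(m), np.array([m]), np.arange(m + 1, n)

K11, K22 = K[np.ix_(i1, i1)], K[np.ix_(i2, i2)]
K1g, K2g = K[np.ix_(i1, ig)], K[np.ix_(i2, ig)]
Kgg = K[np.ix_(ig, ig)]

# Schur complement on the interface: S = Kgg - sum_i Kgi Kii^{-1} Kig
S = Kgg - K1g.T @ np.linalg.solve(K11, K1g) - K2g.T @ np.linalg.solve(K22, K2g)
g = f[ig] - K1g.T @ np.linalg.solve(K11, f[i1]) - K2g.T @ np.linalg.solve(K22, f[i2])
ug = np.linalg.solve(S, g)                       # small interface solve
u1 = np.linalg.solve(K11, f[i1] - K1g @ ug)      # independent subdomain solves
u2 = np.linalg.solve(K22, f[i2] - K2g @ ug)      # (parallel in a real DD code)

u = np.empty(n)
u[i1], u[ig], u[i2] = u1, ug, u2
print(np.max(np.abs(u - np.linalg.solve(K, f))))  # matches the direct solve
```

In an actual DD code the two subdomain solves are independent and run on separate processors; the interface system `S` is the only global coupling.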
Heat transfer model and finite element formulation for simulation of selective laser melting
NASA Astrophysics Data System (ADS)
Roy, Souvik; Juha, Mario; Shephard, Mark S.; Maniatty, Antoinette M.
2017-10-01
A novel approach and finite element formulation for modeling the melting, consolidation, and re-solidification process that occurs in selective laser melting additive manufacturing is presented. Two state variables are introduced to track the phase (melt/solid) and the degree of consolidation (powder/fully dense). The effect of the consolidation on the absorption of the laser energy into the material as it transforms from a porous powder to a dense melt is considered. A Lagrangian finite element formulation, which solves the governing equations on the unconsolidated reference configuration is derived, which naturally considers the effect of the changing geometry as the powder melts without needing to update the simulation domain. The finite element model is implemented into a general-purpose parallel finite element solver. Results are presented comparing to experimental results in the literature for a single laser track with good agreement. Predictions for a spiral laser pattern are also shown.
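A minimal sketch of the two-state-variable idea (reversible melt flag, irreversible consolidation flag) on a 1-D explicit heat conduction grid. All material parameters, the laser flux, and the boundary treatment below are invented for illustration and are far simpler than the Lagrangian FE formulation described above.

```python
import numpy as np

# Hedged 1-D sketch of the two state variables the abstract introduces:
# a reversible phase flag (melt/solid) and an irreversible consolidation flag
# (powder/dense) that raises laser absorption once material has melted.
# All parameters are invented; the paper's formulation is far richer.

nx, dx, dt, nsteps = 50, 1e-5, 1e-7, 2000           # grid and time step (illustrative)
alpha, k, T_melt, T0 = 1e-5, 20.0, 1900.0, 300.0    # diffusivity, conductivity, temps [K]
T = np.full(nx, T0)
phase = np.zeros(nx)                           # 0 = solid, 1 = melt (reversible)
consol = np.zeros(nx)                          # 0 = powder, 1 = dense (irreversible)

for _ in range(nsteps):
    absorptivity = 0.3 + 0.4 * consol[0]       # dense material absorbs more here
    q = absorptivity * 5e9                     # absorbed laser flux [W/m^2]
    T[1:-1] += dt * alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[0] = T[1] + q * dx / k                   # crude flux boundary condition
    T[-1] = T0
    phase = (T > T_melt).astype(float)         # melt state follows temperature
    consol = np.maximum(consol, phase)         # consolidation never reverses

print(consol[0], consol[-1])                   # surface consolidates; far end does not
```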
Coherent states formulation of polymer field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Xingkun; Villet, Michael C.; Materials Research Laboratory, University of California, Santa Barbara, California 93106
2014-01-14
We introduce a stable and efficient complex Langevin (CL) scheme to enable the first direct numerical simulations of the coherent-states (CS) formulation of polymer field theory. In contrast with Edwards’ well-known auxiliary-field (AF) framework, the CS formulation does not contain an embedded nonlinear, non-local, implicit functional of the auxiliary fields, and the action of the field theory has a fully explicit, semi-local, and finite-order polynomial character. In the context of a polymer solution model, we demonstrate that the new CS-CL dynamical scheme for sampling fluctuations in the space of coherent states yields results in good agreement with now-standard AF-CL simulations. The formalism is potentially applicable to a broad range of polymer architectures and may facilitate systematic generation of trial actions for use in coarse-graining and numerical renormalization-group studies.
Perspectives on law enforcement in recreation areas
Lawrence C. Hadley
1971-01-01
The nature and scope of law-enforcement problems in the National Park System are of increasing concern to park and recreation area managers. A positive response by management in terms of formulating and executing a fully professional and effective enforcement program is vital for sustaining public confidence that Parks are safe for individual and family use. Law...
Edwards, Darrin C.; Metz, Charles E.
2012-01-01
Although a fully general extension of ROC analysis to classification tasks with more than two classes has yet to be developed, the potential benefits to be gained from a practical performance evaluation methodology for classification tasks with three classes have motivated a number of research groups to propose methods based on constrained or simplified observer or data models. Here we consider an ideal observer in a task with underlying data drawn from three univariate normal distributions. We investigate the behavior of the resulting ideal observer’s decision variables and ROC surface. In particular, we show that the pair of ideal observer decision variables is constrained to a parametric curve in two-dimensional likelihood ratio space, and that the decision boundary line segments used by the ideal observer can intersect this curve in at most six places. From this, we further show that the resulting ROC surface has at most four degrees of freedom at any point, and not the five that would be required, in general, for a surface in a six-dimensional space to be non-degenerate. In light of the difficulties we have previously pointed out in generalizing the well-known area under the ROC curve performance metric to tasks with three or more classes, the problem of developing a suitable and fully general performance metric for classification tasks with three or more classes remains unsolved. PMID:23162165
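The constraint described above can be illustrated in a few lines: because both ideal observer decision variables are functions of a single scalar datum, the pair sweeps out a one-dimensional curve in the two-dimensional likelihood-ratio plane. The class means and standard deviations below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hedged sketch (parameters invented): for a scalar datum x drawn from one of
# three univariate normal classes, the ideal observer's decision variables are
# likelihood ratios such as (p1/p3, p2/p3). Both are functions of the single
# scalar x, so the pair is constrained to a parametric curve in 2-D LR space.

mus, sigmas = [0.0, 1.0, 2.5], [1.0, 1.2, 0.9]

def npdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def decision_variables(x):
    p = [npdf(x, m, s) for m, s in zip(mus, sigmas)]
    return p[0] / p[2], p[1] / p[2]

xs = np.linspace(-3.0, 5.0, 201)               # sweep the latent datum
curve = np.array([decision_variables(x) for x in xs])
print(curve.shape)                             # every sample lies on this curve
```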
Fully constrained Majorana neutrino mass matrices using Σ(72×3)
NASA Astrophysics Data System (ADS)
Krishnan, R.; Harrison, P. F.; Scott, W. G.
2018-01-01
In 2002, two neutrino mixing ansatze having trimaximally mixed middle (ν₂) columns, namely tri-chi-maximal mixing (TχM) and tri-phi-maximal mixing (TφM), were proposed. In 2012, it was shown that TχM with χ = ±π/16, as well as TφM with φ = ±π/16, leads to the solution sin²θ₁₃ = (2/3) sin²(π/16), consistent with the latest measurements of the reactor mixing angle θ₁₃. To obtain TχM(χ=±π/16) and TφM(φ=±π/16), the type I see-saw framework with fully constrained Majorana neutrino mass matrices was utilised. These mass matrices also resulted in the neutrino mass ratios m₁ : m₂ : m₃ = (2+√2)/(1+√(2(2+√2))) : 1 : (2+√2)/(−1+√(2(2+√2))). In this paper we construct a flavour model based on the discrete group Σ(72×3) and obtain the aforementioned results. A Majorana neutrino mass matrix (a symmetric 3×3 matrix with six complex degrees of freedom) is conveniently mapped into a flavon field transforming as the complex six-dimensional representation of Σ(72×3). Specific vacuum alignments of the flavons are used to arrive at the desired mass matrices.
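The quoted predictions are easy to evaluate numerically. The check below is ours, not code from the paper; it simply confirms the arithmetic of the stated formulas.

```python
import math

# Hedged numeric check (ours, not from the paper): evaluating the quoted
# prediction sin^2(theta_13) = (2/3) sin^2(pi/16) and the mass ratios
# m1 : m2 : m3 = (2+sqrt2)/(1+sqrt(2(2+sqrt2))) : 1 : (2+sqrt2)/(-1+sqrt(2(2+sqrt2))).

s13sq = (2.0 / 3.0) * math.sin(math.pi / 16.0) ** 2
theta13 = math.degrees(math.asin(math.sqrt(s13sq)))

r = math.sqrt(2.0 * (2.0 + math.sqrt(2.0)))
m1 = (2.0 + math.sqrt(2.0)) / (1.0 + r)        # in units of m2
m3 = (2.0 + math.sqrt(2.0)) / (-1.0 + r)

print(round(s13sq, 5), round(theta13, 2))      # ~0.02537, i.e. theta_13 ~ 9.2 deg
print(round(m1, 4), 1.0, round(m3, 4))         # normal ordering: m1 < m2 < m3
```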
NASA Astrophysics Data System (ADS)
Zhu, Minjie; Scott, Michael H.
2017-07-01
Accurate and efficient response sensitivities for fluid-structure interaction (FSI) simulations are important for assessing the uncertain response of coastal and off-shore structures to hydrodynamic loading. To compute gradients efficiently via the direct differentiation method (DDM) for the fully incompressible fluid formulation, approximations of the sensitivity equations are necessary, leading to inaccuracies of the computed gradients when the geometry of the fluid mesh changes rapidly between successive time steps or the fluid viscosity is nonzero. To maintain accuracy of the sensitivity computations, a quasi-incompressible fluid is assumed for the response analysis of FSI using the particle finite element method and DDM is applied to this formulation, resulting in linearized equations for the response sensitivity that are consistent with those used to compute the response. Both the response and the response sensitivity can be solved using the same unified fractional step method. FSI simulations show that although the response using the quasi-incompressible and incompressible fluid formulations is similar, only the quasi-incompressible approach gives accurate response sensitivity for viscous, turbulent flows regardless of time step size.
Cowley, Nicola L.; Forbes, Sarah; Amézquita, Alejandro; McClure, Peter; Humphreys, Gavin J.
2015-01-01
Risk assessments of the potential for microbicides to select for reduced bacterial susceptibility have been based largely on data generated through the exposure of bacteria to microbicides in aqueous solution. Since microbicides are normally formulated with multiple excipients, we have investigated the effect of formulation on antimicrobial activity and the induction of bacterial insusceptibility. We tested 8 species of bacteria (7 genera) before and after repeated exposure (14 passages), using a previously validated gradient plating system, for their susceptibilities to the microbicides benzalkonium chloride, benzisothiazolinone, chlorhexidine, didecyldimethylammonium chloride, DMDM-hydantoin, polyhexamethylene biguanide, thymol, and triclosan in aqueous solution (nonformulated) and in formulation with excipients often deployed in consumer products. Susceptibilities were also assessed following an additional 14 passages without microbicide to determine the stability of any susceptibility changes. MICs and minimum bactericidal concentrations (MBC) were on average 11-fold lower for formulated microbicides than for nonformulated microbicides. After exposure to the antimicrobial compounds, of 72 combinations of microbicide and bacterium there were 19 ≥4-fold (mean, 8-fold) increases in MIC for nonformulated and 8 ≥4-fold (mean, 2-fold) increases in MIC for formulated microbicides. Furthermore, there were 20 ≥4-fold increases in MBC (mean, 8-fold) for nonformulated and 10 ≥4-fold (mean, 2-fold) increases in MBC for formulated microbicides. Susceptibility decreases fully or partially reverted back to preexposure values for 49% of MICs and 72% of MBCs after further passage. In summary, formulated microbicides exhibited greater antibacterial potency than unformulated actives, and susceptibility decreases after repeated exposure were lower in frequency and extent. PMID:26253662
Naderkhani, Elenaz; Erber, Astrid; Škalko-Basnet, Nataša; Flaten, Gøril Eide
2014-02-01
The antiviral drug acyclovir (ACV) suffers from poor solubility in both lipophilic and hydrophilic environments, leading to low and highly variable bioavailability. To overcome these limitations, this study aimed at designing mucoadhesive ACV-containing liposomes to improve its permeability. Liposomes were prepared from egg phosphatidylcholine (E-PC) and E-PC/egg phosphatidylglycerol (E-PC/E-PG) and their surfaces coated with Carbopol. All liposomal formulations were fully characterized, and for the first time the phospholipid vesicle-based permeation assay (PVPA) was used for testing in vitro permeability of drug from mucoadhesive liposome formulations. The negatively charged E-PC/E-PG liposomes could encapsulate more ACV than neutral E-PC liposomes. Coating with Carbopol increased the entrapment in the neutral E-PC liposomes. The incorporation of ACV into liposomes exhibited a significant increase in its in vitro permeability, compared with its aqueous solution. The neutral E-PC liposomal formulations exhibited higher ACV permeability values compared with charged E-PC/E-PG formulations. Coating with Carbopol significantly enhanced the permeability from the E-PC/E-PG liposomes, as well as sonicated E-PC liposomes, which showed the highest permeability of all tested formulations. The increased permeability was consistent with the formulations' mucoadhesive properties. This indicates that the PVPA is suitable to distinguish between permeability of ACV from different mucoadhesive liposome formulations developed for various routes of administration. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Le Chatelier's principle with multiple relaxation channels
NASA Astrophysics Data System (ADS)
Gilmore, R.; Levine, R. D.
1986-05-01
Le Chatelier's principle is discussed within the constrained variational approach to thermodynamics. The formulation is general enough to encompass systems not in thermal (or chemical) equilibrium. Particular attention is given to systems with multiple constraints which can be relaxed. The moderation of the initial perturbation increases as additional constraints are removed. This result is studied in particular when the (coupled) relaxation channels have widely different time scales. A series of inequalities is derived which describes the successive moderation as each successive relaxation channel opens up. These inequalities are interpreted within the metric-geometry representation of thermodynamics.
Minerva: Cylindrical coordinate extension for Athena
NASA Astrophysics Data System (ADS)
Skinner, M. Aaron; Ostriker, Eve C.
2013-02-01
Minerva is a cylindrical coordinate extension of the Athena astrophysical MHD code of Stone, Gardiner, Teuben, and Hawley. The extension follows the approach of Athena's original developers and has been designed to alter the existing Cartesian-coordinates code as minimally and transparently as possible. The numerical equations in cylindrical coordinates are formulated to maintain consistency with constrained transport (CT), a central feature of the Athena algorithm, while making use of previously implemented code modules such as the Riemann solvers. Angular momentum transport, which is critical in astrophysical disk systems dominated by rotation, is treated carefully.
Control of linear uncertain systems utilizing mismatched state observers
NASA Technical Reports Server (NTRS)
Goldstein, B.
1972-01-01
The control of linear continuous dynamical systems is investigated as a problem of limited state feedback control. The equations which describe the structure of an observer are developed, constrained to time-invariant systems. The optimal control problem is formulated, accounting for the uncertainty in the design parameters. Expressions for bounds on closed-loop stability are also developed. The results indicate that very little uncertainty may be tolerated before divergence occurs in the recursive computation algorithms, and the derived stability bound yields extremely conservative estimates of regions of allowable parameter variations.
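The sensitivity to model mismatch that the abstract reports can be illustrated qualitatively with a toy observer. The system, gains, and perturbation below are invented; this is not the paper's formulation.

```python
import numpy as np

# Hedged sketch of the qualitative point (not the paper's formulation): a
# Luenberger-style observer designed for nominal dynamics A_nom develops a
# persistent, growing estimation error when the true plant is perturbed.
# Discrete-time double integrator; all numbers are illustrative.

dt = 0.1
A_nom = np.array([[1.0, dt], [0.0, 1.0]])     # nominal model used by the observer
C = np.array([[1.0, 0.0]])                    # measure position only
L = np.array([[0.6], [0.8]])                  # hand-tuned gain; A_nom - L C is stable

def run(delta):
    """Return the final estimation-error norm for plant A = A_nom + delta."""
    A_true = A_nom + delta
    x = np.array([[1.0], [0.5]])              # true state
    xh = np.zeros((2, 1))                     # observer estimate
    for _ in range(200):
        y = C @ x                             # measurement from the true plant
        xh = A_nom @ xh + L @ (y - C @ xh)    # observer propagates nominal model
        x = A_true @ x
    return float(np.linalg.norm(x - xh))

print(run(np.zeros((2, 2))))                      # matched model: error -> ~0
print(run(np.array([[0.0, 0.0], [0.0, 0.02]])))   # small mismatch: error persists
```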
A link representation for gravity amplitudes
NASA Astrophysics Data System (ADS)
He, Song
2013-10-01
We derive a link representation for all tree amplitudes in supergravity, from a recent conjecture by Cachazo and Skinner. The new formula explicitly writes amplitudes as contour integrals over constrained link variables, with an integrand naturally expressed in terms of determinants, or equivalently tree diagrams. Important symmetries of the amplitude, such as supersymmetry, parity and (partial) permutation invariance, are kept manifest in the formulation. We also comment on rewriting the formula in a GL(k)-invariant manner, which may serve as a starting point for the generalization to possible Grassmannian contour integrals.
Multispecies diffusion models: A study of uranyl species diffusion
NASA Astrophysics Data System (ADS)
Liu, Chongxuan; Shang, Jianying; Zachara, John M.
2011-12-01
Rigorous numerical description of multispecies diffusion requires coupling of species, charge, and aqueous and surface complexation reactions that collectively affect diffusive fluxes. The applicability of a fully coupled diffusion model is, however, often constrained by the availability of species self-diffusion coefficients, as well as by computational complication in imposing charge conservation. In this study, several diffusion models with variable complexity in charge and species coupling were formulated and compared to describe reactive multispecies diffusion in groundwater. Diffusion of uranyl [U(VI)] species was used as an example in demonstrating the effectiveness of the models in describing multispecies diffusion. Numerical simulations found that a diffusion model with a single, common diffusion coefficient for all species was sufficient to describe multispecies U(VI) diffusion under a steady-state condition of major chemical composition, but not under transient chemical conditions. Simulations revealed that for multispecies U(VI) diffusion under transient chemical conditions, a fully coupled diffusion model could be well approximated by a component-based diffusion model when the diffusion coefficient for each chemical component was properly selected. The component-based diffusion model considers the difference in diffusion coefficients between chemical components, but not between the species within each chemical component. This treatment significantly enhanced computational efficiency at the expense of minor charge conservation error. The charge balance in the component-based diffusion model can be enforced, if necessary, by adding a secondary migration term resulting from model simplification. The effect of ion activity coefficient gradients on multispecies diffusion is also discussed. The diffusion models were applied to describe U(VI) diffusive mass transfer in intragranular domains in two sediments collected from the U.S. Department of Energy's Hanford 300A, where intragranular diffusion is a rate-limiting process controlling U(VI) adsorption and desorption. The grain-scale reactive diffusion model was able to describe U(VI) adsorption/desorption kinetics that had been previously described using a semiempirical, multirate model. Compared with the multirate model, the diffusion models have the advantage of providing spatiotemporal speciation evolution within the diffusion domains.
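The common-coefficient versus component-based distinction can be sketched with a toy transient calculation. The grid, coefficients, and initial condition below are invented; the point is only that the two treatments diverge under transient conditions.

```python
import numpy as np

# Hedged sketch of the comparison in the abstract (not the authors' code):
# 1-D transient diffusion of two chemical components, once with a single common
# diffusion coefficient and once with component-specific coefficients (the
# "component-based" treatment). All coefficients and grid values are invented.

nx, dx, dt, nsteps = 100, 1e-3, 50.0, 1000     # [m], [s]; explicit FTCS, stable
D = {"common": (1e-9, 1e-9), "component": (1e-9, 5e-10)}   # [m^2/s]

def diffuse(D1, D2):
    c = np.zeros((2, nx))
    c[:, :10] = 1.0                            # step initial condition
    for _ in range(nsteps):
        for i, Di in enumerate((D1, D2)):
            lap = (c[i, 2:] - 2 * c[i, 1:-1] + c[i, :-2]) / dx**2
            c[i, 1:-1] += dt * Di * lap        # endpoints held at fixed values
    return c

c_common = diffuse(*D["common"])
c_comp = diffuse(*D["component"])
print(np.max(np.abs(c_common[1] - c_comp[1])))  # transient profiles differ
```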
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramér-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
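The constrained quantity itself, PAPR, is simple to compute for an OFDM waveform. The subcarrier count and QPSK modulation below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Hedged sketch: peak-to-average power ratio (PAPR) of an OFDM time-domain
# signal, the quantity constrained in the waveform design above.

rng = np.random.default_rng(1)
N = 64                                         # number of OFDM subcarriers

def papr_db(freq_coeffs):
    x = np.fft.ifft(freq_coeffs)               # OFDM modulation: IFFT of symbols
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2.0)
print(papr_db(qpsk))               # random QPSK symbol: several dB, data-dependent
print(papr_db(np.ones(N)))         # equal coefficients -> impulse: 10*log10(N) ~ 18.1 dB
```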
Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing
2014-01-01
Electron cryomicroscopy is becoming a major experimental technique in solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem into a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates the dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.
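The exponential factor in such bounds typically comes from dynamic programming over subsets of the detected elements. The toy below is not DP-TOSS itself, only a minimal illustration of that subset-DP idea with an invented cost matrix.

```python
# Hedged toy sketch of the algorithmic idea only (this is not DP-TOSS): assign N
# sequence secondary-structure elements to N detected "sticks" by dynamic
# programming over subsets of used sticks -- the source of the 2^N factor in the
# stated O(Delta^2 N^2 2^N) bound. The cost values below are invented.

def best_assignment(cost):
    """Minimum total cost over all one-to-one element-to-stick assignments."""
    n = len(cost)
    best = {0: 0.0}                            # mask of used sticks -> min cost
    for i in range(n):                         # assign sequence elements in order
        nxt = {}
        for mask, c in best.items():
            for j in range(n):
                if not mask & (1 << j):        # stick j still unused
                    m2, v = mask | (1 << j), c + cost[i][j]
                    if v < nxt.get(m2, float("inf")):
                        nxt[m2] = v
        best = nxt
    return best[(1 << n) - 1]

cost = [[4.0, 2.0, 9.0],
        [3.0, 7.0, 1.0],
        [8.0, 6.0, 5.0]]
print(best_assignment(cost))                   # 10.0: elements 0,1,2 -> sticks 1,0,2
```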
Benchmarking optimization software with COPS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, E.D.; More, J.J.
2001-01-08
The COPS test set provides a modest selection of difficult nonlinearly constrained optimization problems from applications in optimal design, fluid dynamics, parameter estimation, and optimal control. In this report we describe version 2.0 of the COPS problems. The formulation and discretization of the original problems have been streamlined and improved, and new problems have been added. The presentation of COPS follows the original report, but the description of the problems has been streamlined. For each problem we discuss its formulation and present structural data on the formulation in Table 0.1; the aim of presenting these data is to provide an approximate idea of the size and sparsity of the problem. We also include the results of computational experiments with the LANCELOT, LOQO, MINOS, and SNOPT solvers. These computational experiments differ from the original results in that we have deleted problems that were considered too easy. Moreover, in the current version of the computational experiments, each problem is tested with four variations. An important difference between this report and the original report is that the tables that present the computational experiments are generated automatically from the testing script. This is explained in more detail in the report.
Evaluating the effects of real power losses in optimal power flow based storage integration
Castillo, Anya; Gayme, Dennice
2017-03-27
This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second-order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S-based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.
Development of a special purpose spacecraft interior coating, phase 1
NASA Technical Reports Server (NTRS)
Bartoszek, E. J.; Nannelli, P.
1975-01-01
Coating formulations were developed consisting of latex blends of fluorocarbon polymers, acrylic resins, stabilizers, modifiers, other additives, and a variety of inorganic pigments. Suitable latex primers were also developed from an acrylic latex base. The formulations dried to touch in about one hour and were fully dry in about twenty-four hours under normal room temperature and humidity conditions. The resulting coatings displayed good optical and mechanical properties, including excellent bonding to (pre-treated) substrates. In addition, the preferred compositions were found to be self-extinguishing when applied to nonflammable substrates and could meet the offgassing requirements specified by NASA for the intended application. Improvements are needed in abrasion resistance and hardness.
Depth varying rupture properties during the 2015 Mw 7.8 Gorkha (Nepal) earthquake
NASA Astrophysics Data System (ADS)
Yue, Han; Simons, Mark; Duputel, Zacharie; Jiang, Junle; Fielding, Eric; Liang, Cunren; Owen, Susan; Moore, Angelyn; Riel, Bryan; Ampuero, Jean Paul; Samsonov, Sergey V.
2017-09-01
On April 25th 2015, the Mw 7.8 Gorkha (Nepal) earthquake ruptured a portion of the Main Himalayan Thrust underlying Kathmandu and surrounding regions. We develop kinematic slip models of the Gorkha earthquake using both a regularized multi-time-window (MTW) approach and an unsmoothed Bayesian formulation, constrained by static and high rate GPS observations, synthetic aperture radar (SAR) offset images, interferometric SAR (InSAR), and teleseismic body wave records. These models indicate that Kathmandu is located near the updip limit of fault slip and approximately 20 km south of the centroid of fault slip. Fault slip propagated unilaterally along-strike in an ESE direction for approximately 140 km with a 60 km cross-strike extent. The deeper portions of the fault are characterized by a larger ratio of high frequency (0.03-0.2 Hz) to low frequency slip than the shallower portions. From both the MTW and Bayesian results, we can resolve depth variations in slip characteristics, with higher slip roughness, higher rupture velocity, longer rise time and higher complexity of subfault source time functions in the deeper extents of the rupture. The depth varying nature of rupture characteristics suggests that the updip portions are characterized by relatively continuous rupture, while the downdip portions may be better characterized by a cascaded rupture. The rupture behavior and the tectonic setting indicate that the earthquake may have ruptured both the fully seismically locked portion and a deeper transitional portion of the collision interface, analogous to what has been seen in major subduction zone earthquakes.
NASA Astrophysics Data System (ADS)
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable-order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three-degree-of-freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous-time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
Fully Nonlinear Modeling and Analysis of Precision Membranes
NASA Technical Reports Server (NTRS)
Pai, P. Frank; Young, Leyland G.
2003-01-01
High precision membranes are used in many current space applications. This paper presents a fully nonlinear membrane theory with forward and inverse analyses of high precision membrane structures. The fully nonlinear membrane theory is derived from Jaumann strains and stresses, exact coordinate transformations, the concept of local relative displacements, and orthogonal virtual rotations. In this theory, energy and Newtonian formulations are fully correlated, and every structural term can be interpreted in terms of vectors. Fully nonlinear ordinary differential equations (ODEs) governing the large static deformations of known axisymmetric membranes under known axisymmetric loading (i.e., forward problems) are presented as first-order ODEs, and a method for obtaining numerically exact solutions using the multiple shooting procedure is shown. A method for obtaining the undeformed geometry of any axisymmetric membrane with a known inflated geometry and a known internal pressure (i.e., inverse problems) is also derived. Numerical results from forward analysis are verified using results in the literature, and results from inverse analysis are verified using known exact solutions and solutions from the forward analysis. Results show that the membrane theory and the proposed numerical methods for solving nonlinear forward and inverse membrane problems are accurate.
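The shooting idea behind such boundary-value solvers can be shown on a much simpler problem. The sketch below uses single shooting (the paper uses multiple shooting) on an illustrative Bratu-type BVP, not the membrane equations themselves.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hedged sketch of the shooting idea (single shooting here; the paper applies a
# multiple shooting procedure to far richer membrane ODEs). Illustrative
# two-point BVP: y'' = -exp(y), y(0) = y(1) = 0, solved by root-finding on the
# unknown initial slope s = y'(0).

def shoot(s):
    """Integrate the IVP with y(0)=0, y'(0)=s and return the end value y(1)."""
    sol = solve_ivp(lambda t, y: [y[1], -np.exp(y[0])],
                    (0.0, 1.0), [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

s_star = brentq(shoot, 0.0, 2.0)   # slope bracket chosen so y(1) changes sign
print(s_star)                      # ~0.549 for this Bratu-type problem
print(abs(shoot(s_star)))          # boundary residual, essentially zero
```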
Ion Beam Analysis of Diffusion in Diamondlike Carbon Films
NASA Astrophysics Data System (ADS)
Chaffee, Kevin Paul
The Van de Graaff accelerator facility at Case Western Reserve University was developed into an analytical research center capable of performing Rutherford Backscattering Spectrometry, Elastic Recoil Detection Analysis for hydrogen profiling, Proton Enhanced Scattering, and ⁴He resonant scattering for ¹⁶O profiling. These techniques were applied to the study of Au, Na⁺, Cs⁺, and H₂O diffusion in a-C:H films. The results are consistent with the fully constrained network model of the microstructure as described by Angus and Jansen.
Russia’s Ambiguous Warfare and Implications for the U.S. Marine Corps
2015-05-01
presence no longer constrained by former legal agreements with Ukraine, it can fully utilize Crimea as a platform for power projection. The Russian...in Crimea will create a strong line of defense for the Russian homeland. Russia’s air defense systems in Crimea reach nearly half of the Black Sea...and its surface attack systems reach almost all of the Black Sea area. Historically, a Russian military build-up of this size on the northern shore
In Search of a Method. Workpapers in Teaching English as a Second Language, Vol. 9, June, 1975.
ERIC Educational Resources Information Center
Prator, Clifford H.
1974-01-01
Though the audiolingual approach has lost much of the support that it once enjoyed from methodologists and language teachers, no new method--fully formulated, coherent, and sufficiently in harmony with current developments in psychology and linguistics--has yet arisen to take its place. Many new directions in language teaching are apparent, most…
An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
Fully Adaptive Radar Modeling and Simulation Development
2017-04-01
Graeme E. Smith, The Ohio State University; Bruce L. McKinley, Signal Processing Consultants, Inc. April 2017 Final Report. Air Force Materiel Command, United States Air Force.
Portent of Heine's Reciprocal Square Root Identity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohl, H W
Precise efforts in theoretical astrophysics are needed to fully understand the mechanisms that govern the structure, stability, dynamics, formation, and evolution of differentially rotating stars. Direct computation of the physical attributes of a star can be facilitated by the use of highly compact azimuthal and separation angle Fourier formulations of the Green's functions for the linear partial differential equations of mathematical physics.
Identification of Novel Fluorochemicals in Aqueous Film-Forming Foams (AFFF) Used by the US Military
Place, Benjamin J.; Field, Jennifer A.
2012-01-01
Aqueous film-forming foams (AFFFs) are a vital tool for fighting large hydrocarbon fires and are used by public, commercial, and military firefighting organizations. To achieve these superior firefighting capabilities, AFFFs contain fluorochemical surfactants, many of whose chemical identities are listed as proprietary. Large-scale controlled (e.g., training activities) and uncontrolled releases of AFFF have resulted in contamination of groundwater. Information on the composition of AFFF formulations is needed to fully define the extent of groundwater contamination, and the first step is to fully define the fluorochemical composition of AFFFs used by the US military. Fast atom bombardment mass spectrometry (FAB-MS) and high resolution quadrupole-time-of-flight mass spectrometry (QTOF-MS) were combined to elucidate chemical formulas for the fluorochemicals in AFFF mixtures and, along with patent-based information, structures were assigned. Sample collection and analysis were focused on AFFFs that have been designated as certified for US military use. Ten different fluorochemical classes were identified in the seven military-certified AFFF formulations; they include anionic, cationic, and zwitterionic surfactants with perfluoroalkyl chain lengths ranging from 4 to 12. The environmental implications are discussed and research needs are identified. PMID:22681548
Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1999-01-01
A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
A method to stabilize linear systems using eigenvalue gradient information
NASA Technical Reports Server (NTRS)
Wieseman, C. D.
1985-01-01
Formal optimization methods and eigenvalue gradient information are used to develop a stabilizing control law for a closed loop linear system that is initially unstable. The method was originally formulated by using direct, constrained optimization methods with the constraints being the real parts of the eigenvalues. However, because of problems in trying to achieve stabilizing control laws, the problem was reformulated to be solved differently. The method described uses the Davidon-Fletcher-Powell minimization technique to solve an indirect, constrained minimization problem in which the performance index is the Kreisselmeier-Steinhauser function of the real parts of all the eigenvalues. The method is applied successfully to solve two different problems: the determination of a fourth-order control law that stabilizes a single-input single-output active flutter suppression system and the determination of a second-order control law for a multi-input multi-output lateral-directional flight control system. Various sets of design variables and initial starting points were chosen to show the robustness of the method.
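The Kreisselmeier-Steinhauser (KS) function replaces the non-smooth maximum of the eigenvalue real parts with a smooth envelope that a gradient-based minimizer can handle. A minimal sketch of the idea on a hypothetical second-order plant (using BFGS rather than the paper's Davidon-Fletcher-Powell routine; the plant matrices are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

RHO = 50.0  # KS aggregation parameter; larger values track max(Re(lambda)) more closely

def ks(values, rho=RHO):
    # Kreisselmeier-Steinhauser envelope: a smooth, conservative
    # approximation of max(values), suitable for gradient-based optimizers.
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

def spectral_abscissa_ks(gains, a, b, c):
    # Closed-loop matrix A - B k C for a static output-feedback gain k
    k = gains.reshape(b.shape[1], c.shape[0])
    acl = a - b @ k @ c
    return ks(np.linalg.eigvals(acl).real)

# Hypothetical unstable plant (one eigenvalue in the right half-plane)
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

res = minimize(spectral_abscissa_ks, x0=np.zeros(1), args=(A, B, C),
               method="BFGS")
closed_loop = A - B @ res.x.reshape(1, 1) @ C
max_real = float(np.linalg.eigvals(closed_loop).real.max())
```

Minimizing the KS envelope drives all eigenvalue real parts negative simultaneously, which is exactly the stabilization mechanism the abstract describes.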
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
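The multiplier-centric viewpoint can be illustrated with plain stochastic dual ascent, where the Lagrange multiplier doubles as a virtual queue length; this is only a toy sketch of the general idea, not the paper's accelerated learn-and-adapt scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic resource allocation: minimize E[0.5 * x^2] subject to
# E[d - x] <= 0, where d is a random demand. The dual variable lam is
# updated from observed constraint violations, and the primal allocation
# x is the minimizer of the instantaneous Lagrangian.
STEP = 0.05
lam = 0.0
allocations = []
for _ in range(20000):
    d = rng.uniform(0.5, 1.5)             # random demand arrival, mean 1.0
    x = lam                               # argmin_x 0.5*x^2 - lam*x
    lam = max(0.0, lam + STEP * (d - x))  # dual ascent / virtual queue update
    allocations.append(x)

avg_alloc = float(np.mean(allocations[-5000:]))  # hovers near E[d] = 1.0
```

The projection onto lam >= 0 is what makes the multiplier behave like a queue: it grows when demand outpaces allocation and drains otherwise, which is the queue-stability link mentioned in the abstract.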
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we have considered the problem of constrained redundancy allocation for a series system with interval-valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by the penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation, and elitism. As a special case, considering the lower and upper bounds of the interval-valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed values of component reliability have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
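The penalty-plus-GA recipe can be sketched for a small series system with point-valued (rather than interval-valued) reliabilities; all component data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

R = np.array([0.80, 0.70, 0.90])   # component reliabilities (point values;
                                   # the paper treats these as intervals)
COST = np.array([2.0, 3.0, 1.0])
BUDGET = 20.0

def fitness(n):
    # Series system with n_i parallel redundant units per stage,
    # penalized for exceeding the cost budget. Because costs are
    # integer multiples, any violation outweighs any reliability gain.
    rel = np.prod(1.0 - (1.0 - R) ** n)
    over = max(0.0, COST @ n - BUDGET)
    return rel - over

def ga(pop_size=60, gens=200, nmax=6):
    pop = rng.integers(1, nmax + 1, size=(pop_size, 3))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        # Tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Uniform crossover with a shifted copy of the parent pool
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Uniform mutation
        mut = rng.random(children.shape) < 0.05
        children[mut] = rng.integers(1, nmax + 1, mut.sum())
        # Elitism: carry over the best individual of this generation
        children[0] = pop[np.argmax(fit)]
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)], float(fit.max())

best, best_rel = ga()
total_cost = float(COST @ best)
```

The interval-valued variant in the paper replaces `fitness` with interval arithmetic and an order relation on intervals; the GA machinery itself is unchanged.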
The Athena Astrophysical MHD Code in Cylindrical Geometry
NASA Astrophysics Data System (ADS)
Skinner, M. A.; Ostriker, E. C.
2011-10-01
We have developed a method for implementing cylindrical coordinates in the Athena MHD code (Skinner & Ostriker 2010). The extension has been designed to alter the existing Cartesian-coordinates code (Stone et al. 2008) as minimally and transparently as possible. The numerical equations in cylindrical coordinates are formulated to maintain consistency with constrained transport, a central feature of the Athena algorithm, while making use of previously implemented code modules such as the eigensystems and Riemann solvers. Angular-momentum transport, which is critical in astrophysical disk systems dominated by rotation, is treated carefully. We describe modifications for cylindrical coordinates of the higher-order spatial reconstruction and characteristic evolution steps as well as the finite-volume and constrained transport updates. Finally, we have developed a test suite of standard and novel problems in one-, two-, and three-dimensions designed to validate our algorithms and implementation and to be of use to other code developers. The code is suitable for use in a wide variety of astrophysical applications and is freely available for download on the web.
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2017-07-10
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
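For a linearized, Tikhonov-regularized problem, the model resolution matrix takes a closed form; a minimal sketch with a random Jacobian standing in for the DALECv2 sensitivities (the paper's setup uses the actual adjoint-derived Jacobian and prior covariances):

```python
import numpy as np

def resolution_matrix(j, alpha):
    # For the regularized linear inverse problem
    #   m_rec = (J^T J + alpha I)^{-1} J^T d,
    # the recovered model relates to the true one (noise-free) by
    #   m_rec = R m_true with R = (J^T J + alpha I)^{-1} J^T J.
    # Diagonal entries of R near 1 flag well-resolved parameters.
    jtj = j.T @ j
    return np.linalg.solve(jtj + alpha * np.eye(j.shape[1]), jtj)

rng = np.random.default_rng(4)
J = rng.standard_normal((50, 5))   # stand-in Jacobian: 50 obs, 5 parameters
Rm = resolution_matrix(J, alpha=0.1)

shape_ok = Rm.shape == (5, 5)
diag = np.diag(Rm)
diag_ok = bool(np.all((diag > 0.0) & (diag < 1.0)))
```

Each eigenvalue of R equals lambda_i / (lambda_i + alpha) for an eigenvalue lambda_i of J^T J, so regularization visibly trades resolution for stability.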
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
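A generic LSQI problem, min ||Ax - b|| subject to ||x|| <= alpha, can be solved by checking whether the unconstrained least-squares solution is feasible and, if not, finding the Lagrange multiplier from a secular equation. A sketch of that generic problem (the paper's energy-based variant involves norm-defining weighting matrices not shown here):

```python
import numpy as np
from scipy.optimize import brentq

def lsqi(a, b, alpha):
    # Least squares subject to the quadratic inequality ||x||_2 <= alpha.
    x_ls = np.linalg.lstsq(a, b, rcond=None)[0]
    if np.linalg.norm(x_ls) <= alpha:
        return x_ls                      # constraint inactive
    # Constraint active: solve the secular equation ||x(mu)|| = alpha
    # for the multiplier mu >= 0, where x(mu) = (A^T A + mu I)^{-1} A^T b.
    ata, atb = a.T @ a, a.T @ b
    eye = np.eye(a.shape[1])
    def norm_gap(mu):
        return np.linalg.norm(np.linalg.solve(ata + mu * eye, atb)) - alpha
    mu = brentq(norm_gap, 1e-12, 1e8)
    return np.linalg.solve(ata + mu * eye, atb)

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
x = lsqi(A, b, alpha=0.5)
xnorm = float(np.linalg.norm(x))
```

Because ||x(mu)|| decreases monotonically in mu, the bracketed root-find is reliable, which is consistent with the negligible real-time burden claimed in the abstract.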
Improvements to Wire Bundle Thermal Modeling for Ampacity Determination
NASA Technical Reports Server (NTRS)
Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah
2017-01-01
Determining current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed in an attempt to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprised of fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented allowing for larger configurations and is not constrained for low bundle infrared emissivity calculations. Formulation of key internal radiation and interface conductance parameters is discussed including the effects of temperature and air pressure on wire to wire thermal conductance. Test cases comparing model-predicted ampacity and that calculated from standards documents are presented.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system that is much smaller, in search space size, when compared to the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer programming into the more manageable problem of searching in a continuous space. The constrained optimization problem is solved in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with a larger number of constraints and larger networks can be easily adapted and solved.
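The two-stage global-then-local strategy can be sketched on a toy constrained problem: a minimal particle swarm with a quadratic penalty supplies a starting point to SciPy's SLSQP routine (an SQP implementation). The objective and constraint below are illustrative stand-ins for the scheduling model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy problem: minimize the Rosenbrock function subject to x + y <= 1.5.
def objective(z):
    x, y = z
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

constraint = {"type": "ineq", "fun": lambda z: 1.5 - z[0] - z[1]}

def pso(n=30, iters=200, lo=-2.0, hi=2.0):
    # Stage 1: minimal particle swarm on a penalized objective
    def pen(z):
        return objective(z) + 1e3 * max(0.0, z[0] + z[1] - 1.5) ** 2
    pos = rng.uniform(lo, hi, (n, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pval = np.array([pen(p) for p in pos])
    gbest = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([pen(p) for p in pos])
        better = val < pval
        pbest[better], pval[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest

# Stage 2: SQP refinement from the swarm's best point
start = pso()
res = minimize(objective, start, method="SLSQP", constraints=[constraint])
sum_xy = float(res.x[0] + res.x[1])
final_val = float(res.fun)
```

The penalty lets the derivative-free global stage roam the mixed/continuous landscape, while the SQP stage enforces the constraint exactly at convergence.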
Chance-Constrained System of Systems Based Operation of Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargarian, Amin; Fu, Yong; Wu, Hongyu
In this paper, a chance-constrained system of systems (SoS) based decision-making approach is presented for stochastic scheduling of power systems encompassing active distribution grids. Based on the concept of SoS, the independent system operator (ISO) and distribution companies (DISCOs) are modeled as self-governing systems. These systems collaborate with each other to run the entire power system in a secure and economic manner. Each self-governing system accounts for its local reserve requirements and line flow constraints with respect to the uncertainties of load and renewable energy resources. A set of chance constraints are formulated to model the interactions between the ISO and DISCOs. The proposed model is solved using the analytical target cascading (ATC) method, a distributed optimization algorithm in which only a limited amount of information is exchanged between the collaborative ISO and DISCOs. In this paper, a 6-bus system and a modified IEEE 118-bus power system are studied to show the effectiveness of the proposed algorithm.
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows, and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
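For a single constraint with (approximately) Gaussian uncertainty, the analytical reformulation amounts to tightening the deterministic limit by a quantile-scaled standard deviation; a minimal sketch with invented numbers:

```python
from statistics import NormalDist

def tightened_limit(limit, sigma, epsilon):
    # Analytical reformulation of a Gaussian chance constraint:
    #   P(flow <= limit) >= 1 - epsilon
    # becomes the deterministic constraint
    #   nominal_flow <= limit - z_{1-epsilon} * sigma,
    # where sigma is the (linearized) standard deviation of the flow.
    z = NormalDist().inv_cdf(1.0 - epsilon)
    return limit - z * sigma

# Illustrative numbers: a 100 MW line limit, 8 MW flow standard deviation
# from forecast errors, and a 5% allowed violation probability.
eff_limit = tightened_limit(100.0, 8.0, 0.05)   # about 86.8 MW
```

The sample-based reformulations mentioned in the abstract avoid the Gaussian assumption entirely, at the cost of solving the problem over (or iterating against) scenario sets.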
Abd, Eman; Benson, Heather A. E.; Roberts, Michael S.; Grice, Jeffrey E.
2018-01-01
In this work, we examined enhanced skin delivery of minoxidil applied in nanoemulsions incorporating skin penetration enhancers. Aliquots of fully characterized oil-in-water nanoemulsions (1 mL), containing minoxidil (2%) and the skin penetration enhancer oleic acid or eucalyptol as oil phases, were applied to full-thickness excised human skin in Franz diffusion cells, while aqueous solutions (1 mL) containing minoxidil were used as controls. Minoxidil in the stratum corneum (SC), hair follicles, deeper skin layers, and flux through the skin over 24 h was determined, as well as minoxidil solubility in the formulations and in the SC. The nanoemulsions significantly enhanced the permeation of minoxidil through skin compared with control solutions. The eucalyptol formulations (NE) promoted minoxidil retention in the SC and deeper skin layers more than did the oleic acid formulations, while the oleic acid formulations (NO) gave the greatest hair follicle penetration. Minoxidil maximum flux enhancement was associated with increases in both minoxidil SC solubility and skin diffusivity in both nanoemulsion systems. The mechanism of enhancement appeared to be driven largely by increased diffusivity, rather than increased partitioning into the stratum corneum, supporting the concept of enhanced fluidity and disruption of stratum corneum lipids. PMID:29370122
Kelly, H M; Deasy, P B; Busquet, M; Torrance, A A
2004-07-08
Xerostomia is commonly known as 'dry mouth' and is characterised by a reduction or loss in salivary production. A bioadhesive gel for its localised treatment was formulated to help enhance the residence time of the product, based on the polymer Carbopol 974P. The bioadhesion of various formulations was evaluated on different mucosal substrates, as simulations of the oral mucosa of xerostomic patients. Depending on the type of model substrate used, the mechanism of bioadhesion could alter. When the rheology of various formulations was examined, changes in bioadhesion were more easily interpreted, as the presence of other excipients caused an alteration in the rheological profile, with a change from a fully expanded and partially cross-linked system to an entangled system. Improving the lubricity of the product was considered important, with optimum incorporation of vegetable oil causing a desirable lowering of the observed friction of the product. The final complex formulation developed also contained salivary levels of electrolytes to help remineralisation of teeth, fluoride to prevent caries, zinc to enhance taste sensation, triclosan as the main anti-microbial/anti-inflammatory agent and non-cariogenic sweeteners with lemon flavour to increase the palatability of the product while stimulating any residual salivary function.
Altomare, Christopher; Kinzler, Eric R; Buchhalter, August R; Cone, Edward J; Costantino, Anthony
The US Food and Drug Administration (FDA) considers the development of abuse-deterrent formulations of solid oral dosage forms a public health priority and has outlined a series of premarket studies that should be performed prior to submitting an application to the Agency. Category 1 studies are performed to characterize whether the abuse-deterrent properties of a new formulation can be easily defeated. Study protocols are designed to evaluate common abuse patterns of prescription medications as well as more advanced methods that have been reported on drug abuse websites and forums. Because FDA believes Category 1 testing should fully characterize the abuse-deterrent characteristics of an investigational formulation, Category 1 testing is time consuming and requires specialized laboratory resources as well as advanced knowledge of prescription medication abuse. Recent Advisory Committee meetings at FDA have shown that Category 1 tests play a critical role in FDA's evaluation of an investigational formulation. In this article, we will provide a general overview of the methods of manipulation and routes of administration commonly utilized by prescription drug abusers, how those methods and routes are evaluated in a laboratory setting, and discuss data intake, analysis, and reporting to satisfy FDA's Category 1 testing requirements.
Development of Stable Liquid Glucagon Formulations for Use in Artificial Pancreas
Li, Ming; Krasner, Alan; De Souza, Errol
2014-01-01
Background: A promising approach to treat diabetes is the development of fully automated artificial/bionic pancreas systems that use both insulin and glucagon to maintain euglycemia. A physically and chemically stable liquid formulation of glucagon does not currently exist. Our goal is to develop a glucagon formulation that is stable as a clear and gel-free solution, free of fibrils and that has the requisite long-term shelf life for storage in the supply chain, short-term stability for at least 7 days at 37°C, and pump compatibility for use in a bihormonal pump. Methods: We report the development of two distinct families of stable liquid glucagon formulations which utilize surfactant or surfactant-like excipients (LMPC and DDM) to “immobilize” the glucagon in solution potentially through the formation of micelles and prevention of interaction between glucagon molecules. Results: Data are presented that demonstrate long-term physical and chemical stability (~2 years) at 5°C, short-term stability (up to 1 month) under accelerated 37°C testing conditions, pump compatibility for up to 9 days, and adequate glucose responses in dogs and diabetic swine. Conclusions: These stable glucagon formulations show utility and promise for further development in artificial pancreas systems. PMID:25352634
Onuki, Yoshinori; Horita, Akihiro; Kuribayashi, Hideto; Okuno, Yoshihide; Obata, Yasuko; Takayama, Kozo
2014-07-01
A non-destructive method for monitoring creaming of emulsion-based formulations is in great demand because it allows us to understand fully their instability mechanisms. This study was aimed at demonstrating the usefulness of magnetic resonance (MR) techniques, including MR imaging (MRI) and MR spectroscopy (MRS), for evaluating the physicochemical stability of emulsion-based formulations. Emulsions that are applicable as the base of practical skin creams were used as test samples. Substantial creaming was developed by centrifugation, which was then monitored by MRI. The creaming oil droplet layer and aqueous phase were clearly distinguished by quantitative MRI by measuring T1 and the apparent diffusion coefficient. Components in a selected volume in the emulsions could be analyzed using MRS. Then, model emulsions having different hydrophilic-lipophilic balance (HLB) values were tested, and the optimal HLB value for a stable dispersion was determined. In addition, the MRI examination enables the detection of creaming occurring in a polyethylene tube, which is commonly used for commercial products, without losing any image quality. These findings strongly indicate that MR techniques are powerful tools to evaluate the physicochemical stability of emulsion-based formulations. This study will make a great contribution to the development and quality control of emulsion-based formulations.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.
2007-01-01
The strain formulation in elasticity and the compatibility condition in structural mechanics have neither been fully understood nor utilized. This shortcoming prevented the formulation of a direct method to calculate stress. We have researched and understood the compatibility condition for linear problems in elasticity and in finite element analysis. This has led to the completion of the method of force with stress (or stress resultant) as the primary unknown. The method in elasticity is referred to as the completed Beltrami-Michell formulation (CBMF), and it is the integrated force method (IFM) in structures. The dual integrated force method (IFMD) with displacement as the primary unknown has been formulated. IFM and IFMD produce identical responses. The variational derivation of the CBMF yielded the new boundary compatibility conditions. The CBMF can be used to solve stress, displacement, and mixed boundary value problems. The IFM in structures produced high-fidelity response even with a modest finite element model. The IFM has influenced structural design considerably. A fully utilized design method for strength and stiffness limitation has been developed. The singularity condition in optimization has been identified. The CBMF and IFM tensorial approaches are robust formulations because of simultaneous emphasis on the equilibrium equation and the compatibility condition.
NASA Astrophysics Data System (ADS)
He, Y.; Xiaohong, C.; Lin, K.; Wang, Z.
2016-12-01
Water demand (WD) is the basis for water allocation (WA) because it can fully reflect the pressure on water resources from population and socioeconomic development. To deal with the great uncertainties and the absence of consideration of water environmental capacity (WEC) in traditional water demand prediction methods, e.g. statistical models, system dynamics, and the quota method, this study develops a two-stage approach to predict WD under constrained total water use from the perspective of ecological restraint. Regional total water demand (RTWD) is constrained by WEC, the available water resources amount, and the total water use quota. Based on RTWD, WD is allocated in two stages according to game theory, including predicting sub-regional total water demand (SRWD) by calculating the sub-region weights based on the selected indicators of socioeconomic development, and predicting industrial water demand (IWD) according to game theory. Taking the Dongjiang River basin, South China, as an example of WD prediction, according to its constrained total water use quota and WEC, the RTWD in 2020 is 9.83 billion m3, and the IWD for agriculture, industry, service, ecology (off-stream), and domestic use are 2.32 billion m3, 3.79 billion m3, 0.75 billion m3, 0.18 billion m3, and 1.79 billion m3, respectively. The results from this study provide useful insights for effective water allocation under climate change and the strict policy of water resources management.
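The two-stage split can be sketched as proportional allocation by normalized weights; the indicator scores and sector shares below are illustrative, not the Dongjiang values (only the 9.83 billion m3 regional total comes from the abstract, and the paper's actual stage-two allocation is game-theoretic rather than purely proportional):

```python
import numpy as np

RTWD = 9.83  # regional total water demand, billion m^3 (from the abstract)

# Stage 1: split the regional total among sub-regions using normalized
# socioeconomic-development indicator weights (illustrative scores).
indicator_scores = np.array([0.45, 0.30, 0.25])
srwd = RTWD * indicator_scores / indicator_scores.sum()

# Stage 2: split each sub-regional total among sectors
# (agriculture, industry, service, ecology, domestic, other; illustrative).
sector_shares = np.array([0.24, 0.38, 0.08, 0.02, 0.18, 0.10])
iwd = np.outer(srwd, sector_shares / sector_shares.sum())

total_check = float(srwd.sum())
iwd_total = float(iwd.sum())
```

Normalizing at each stage guarantees the allocations sum back to the constrained regional total, so the ecological cap is respected by construction.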
Free vibration of fully functionally graded carbon nanotube reinforced graphite/epoxy laminates
NASA Astrophysics Data System (ADS)
Kuo, Shih-Yao
2018-03-01
This study provides the first-known vibration analysis of fully functionally graded carbon nanotube reinforced hybrid composite (FFG-CNTRHC) laminates. CNTs are non-uniformly distributed to reinforce the graphite/epoxy laminates. Some CNT distribution functions in the plane and thickness directions are proposed to more efficiently increase the stiffening effect. The rule of mixtures is modified by considering the non-homogeneous material properties of FFG-CNTRHC laminates. The formulation of the location dependent stiffness matrix and mass matrix is derived. The effects of CNT volume fraction and distribution on the natural frequencies of FFG-CNTRHC laminates are discussed. The results reveal that the FFG layout may significantly increase the natural frequencies of FFG-CNTRHC laminate.
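The (unmodified) rule of mixtures underlying such stiffness estimates is a volume-fraction-weighted average of constituent moduli; a minimal sketch, noting that the paper's modified version handles position-dependent properties and CNT efficiency effects not captured here:

```python
def rule_of_mixtures(e_cnt, e_matrix, v_cnt, eta=1.0):
    # Weighted average of constituent moduli; eta is a CNT efficiency
    # factor (eta = 1 recovers the plain rule of mixtures). In a fully
    # functionally graded layout, v_cnt would vary with in-plane and
    # through-thickness position.
    return eta * v_cnt * e_cnt + (1.0 - v_cnt) * e_matrix

# Illustrative moduli in GPa: CNT ~1000, epoxy matrix ~3, 10% CNT volume
e_eff = rule_of_mixtures(1000.0, 3.0, 0.10)
```

Making `v_cnt` a function of position is what turns the constant stiffness matrix of a uniform ply into the location-dependent stiffness matrix derived in the paper.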
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models because they are computationally efficient while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, fidelity of semi-distributed model simulations is impacted by (1) formulation of hydrologic response units (HRUs), and (2) aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART with remotely sensed soil moisture observations and spatially based model evaluation metrics.
NASA Astrophysics Data System (ADS)
Gerstmayr, Johannes; Irschik, Hans
2008-12-01
In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Such beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been sufficiently verified with respect to analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, even more serious, the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and a consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while a perfect agreement between the absolute nodal coordinate formulation and the extensible elastica can be gained when applying the proposed modifications. The numerical examples show a very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for the case of large deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages compared to classical beam elements regarding energy-momentum conservation.
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis
The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D electrostatic (ES) implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to multi-D electromagnetic (EM) PIC with the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties for energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle-mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle-orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt ≫ 1 and Δx ≫ λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains over explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing the number of nonlinear function evaluations, leading to an optimal algorithm.
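The energy-conservation claim can be illustrated with a toy comparison. The sketch below uses a harmonic oscillator as a stand-in for one particle in an electrostatic field; the time-centered (Crank-Nicolson/implicit midpoint) update conserves the discrete energy exactly, while explicit forward Euler self-heats. This is only a minimal proxy for the behavior described in the abstract, not the nonlinear multidimensional PIC formulation itself.

```python
# Toy model: x'' = -x (unit mass and frequency), standing in for a particle
# oscillating in a self-consistent electrostatic field.

def energy(x, v):
    return 0.5 * (x * x + v * v)

def explicit_euler(x, v, dt):
    # Energy grows by a factor (1 + dt^2) every step: numerical self-heating.
    return x + dt * v, v - dt * x

def implicit_midpoint(x, v, dt):
    # Time-centered update solved exactly (linear for this problem):
    #   x1 = x + dt*(v + v1)/2,   v1 = v - dt*(x + x1)/2
    a = dt / 2.0
    denom = 1.0 + a * a
    x1 = ((1 - a * a) * x + dt * v) / denom
    v1 = ((1 - a * a) * v - dt * x) / denom
    return x1, v1

def drift(stepper, n=1000, dt=0.1):
    """Relative energy error after n steps."""
    x, v = 1.0, 0.0
    e0 = energy(x, v)
    for _ in range(n):
        x, v = stepper(x, v, dt)
    return abs(energy(x, v) - e0) / e0
```

The implicit scheme's energy error stays at rounding level for any step size, which is the discrete analogue of the "no particle self-heating or self-cooling" property.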
NASA Astrophysics Data System (ADS)
Kelly, R.; Andrews, T.; Dietze, M.
2015-12-01
Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory and Analysis (FIA) database to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns.
In particular, the current ED formulation cannot capture steady state dynamics evident in the ESM. Ongoing efforts are aimed at reformulating ED to more closely approach overall forest dynamics evident in the ESM, and then assimilating inventory data to constrain model parameters and initial conditions.
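The self-thinning boundary mentioned above is a power law N ∝ D^(−α) between stand density N and mean diameter D, which is typically identified by least squares in log-log space. The sketch below recovers the exponent from synthetic stands placed exactly on such a boundary; the coefficient and exponent are made-up illustrative values, not estimates from the study.

```python
# Recover a self-thinning exponent by ordinary least squares on log-transformed
# size-density data (synthetic stands, hypothetical alpha = 1.6).
import math

def fit_power_law(diams, densities):
    """Fit log N = log c - alpha * log D; return (c, alpha)."""
    xs = [math.log(d) for d in diams]
    ys = [math.log(n) for n in densities]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope

# Synthetic stands on the boundary N = 50000 * D**-1.6:
diams = [5.0, 10.0, 20.0, 40.0]                  # mean diameter, cm
densities = [50000 * d ** -1.6 for d in diams]   # stems per unit area
c, alpha = fit_power_law(diams, densities)
```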
Non-Parabolic Hydrodynamic Formulations for the Simulation of Inhomogeneous Semiconductor Devices
NASA Technical Reports Server (NTRS)
Smith, A. W.; Brennan, K. F.
1996-01-01
Hydrodynamic models are becoming prevalent design tools for small-scale devices and other devices in which high-energy effects can dominate transport. Most current hydrodynamic models use a parabolic band approximation to obtain fairly simple conservation equations. Interest in accounting for band structure effects in hydrodynamic device simulation has begun to grow, since parabolic models cannot fully describe the transport in state-of-the-art devices due to the distribution populating non-parabolic states within the band. This paper presents two different non-parabolic formulations of the hydrodynamic model suitable for the simulation of inhomogeneous semiconductor devices. The first formulation uses the Kane dispersion relationship, (ℏk)²/2m = W(1 + αW). The second formulation makes use of a power law, (ℏk)²/2m = xW^y, for the dispersion relation. Hydrodynamic models which use the first formulation rely on the binomial expansion to obtain moment equations with closed-form coefficients. This limits the energy range over which the model is valid. The power-law formulation readily produces closed-form coefficients similar to those obtained using the parabolic band approximation. However, the fitting parameters (x, y) are only valid over a limited energy range. The physical significance of the band non-parabolicity is discussed, as well as the advantages/disadvantages and approximations of the two non-parabolic models. A companion paper describes device simulations based on the three dispersion relationships: parabolic, Kane, and power-law dispersion.
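The three dispersion relations can be compared directly by evaluating ℏ²k²/2m as a function of carrier energy W. The parameter values below (α, x, y) are illustrative placeholders, not fitted values from the paper.

```python
# Three dispersion relations from the abstract, written as hbar^2 k^2/(2m) = f(W).
# alpha, x, y are representative made-up values; W in eV (assumed units).

def parabolic(W):
    return W                      # hbar^2 k^2 / 2m = W

def kane(W, alpha=0.6):           # alpha in 1/eV
    return W * (1.0 + alpha * W)  # hbar^2 k^2 / 2m = W(1 + alpha*W)

def power_law(W, x=1.0, y=1.3):
    return x * W ** y             # hbar^2 k^2 / 2m = x * W**y

# At low energy the Kane relation reduces to the parabolic band; at high
# energy the non-parabolic correction grows, so carriers of a given energy
# occupy larger-k states than the parabolic model predicts.
ratio = kane(1.0) / parabolic(1.0)
```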
Nanocrystal Additives for Advanced Lubricants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, Gregory; Lohuis, James; Demas, Nicholaos
The innovations in engine and drivetrain lubricants are mainly driven by ever more stringent regulations, which demand better fuel economy, lower carbon emissions, and less pollution. Many technologies are being developed for the next generations of vehicles to achieve these goals. Even if these technologies can be adopted, there is still a significant need for a “drop-in” lubricant solution for the existing ground-vehicle fleet that reaps immediate fuel savings while also reducing pollution. Dramatic improvements were observed when Pixelligent’s proprietary, mono-dispersed, and highly scalable metal oxide nanocrystals were added to the base oils. The dispersions in base and formulated oils are clear, with no change in appearance or viscosity. However, the benefits provided by the nanocrystals were limited to the base oils due to the interference of existing additives in the fully formulated oils. Developing a prototype formulation including the nanocrystals that can demonstrate the same improvements observed in the base oils is a critical step toward the commercialization of these advanced nano-additives. A ‘bottom-up’ approach was adopted to develop a prototype lubricant formulation that avoids complicated interactions with the multitude of additives: only a minimal number of the most essential additives are added, step by step, into the formulation, to ensure that they are compatible with the nanocrystals and do not compromise their tribological performance. Tribological performance is characterized to arrive at the best formulations that can demonstrate the commercial potential of the nano-additives.
Non-parabolic hydrodynamic formulations for the simulation of inhomogeneous semiconductor devices
NASA Technical Reports Server (NTRS)
Smith, Arlynn W.; Brennan, Kevin F.
1995-01-01
Hydrodynamic models are becoming prevalent design tools for small-scale devices and other devices in which high-energy effects can dominate transport. Most current hydrodynamic models use a parabolic band approximation to obtain fairly simple conservation equations. Interest in accounting for band structure effects in hydrodynamic device simulation has begun to grow, since parabolic models cannot fully describe the transport in state-of-the-art devices due to the distribution populating non-parabolic states within the band. This paper presents two different non-parabolic formulations of the hydrodynamic model suitable for the simulation of inhomogeneous semiconductor devices. The first formulation uses the Kane dispersion relationship, (ℏk)²/2m = W(1 + αW). The second formulation makes use of a power law, (ℏk)²/2m = xW^y, for the dispersion relation. Hydrodynamic models which use the first formulation rely on the binomial expansion to obtain moment equations with closed-form coefficients. This limits the energy range over which the model is valid. The power-law formulation readily produces closed-form coefficients similar to those obtained using the parabolic band approximation. However, the fitting parameters (x, y) are only valid over a limited energy range. The physical significance of the band non-parabolicity is discussed, as well as the advantages/disadvantages and approximations of the two non-parabolic models. A companion paper describes device simulations based on the three dispersion relationships: parabolic, Kane, and power-law dispersion.
ERIC Educational Resources Information Center
Rushton, J. Philippe
2004-01-01
First, I describe why intelligence (Spearman's "g") can only be fully understood through "r-K" theory, which places it into an evolutionary framework along with brain size, longevity, maturation speed, and several other life-history traits. The "r-K" formulation explains why IQ predicts longevity and also why the gap in mortality rates between…
NASA Astrophysics Data System (ADS)
Le Hardy, D.; Favennec, Y.; Rousseau, B.
2016-08-01
The 2D radiative transfer equation coupled with specular reflection boundary conditions is solved using finite element schemes. Both Discontinuous Galerkin (DG) and Streamline-Upwind Petrov-Galerkin (SUPG) variational formulations are fully developed. These two schemes are validated step-by-step for all involved operators (transport, scattering, reflection) using analytical formulations. Numerical comparison of the two schemes, in terms of convergence rate, reveals that the quadratic SUPG scheme proves efficient for solving such problems. This comparison constitutes the main contribution of the paper. Moreover, the solution process is accelerated using block SOR-type iterative methods, for which the optimal relaxation parameter is determined at very low cost.
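The SOR acceleration with an optimal relaxation parameter can be sketched on a model problem. Below, SOR solves a small 1D Poisson system with the classical choice ω = 2/(1 + √(1 − ρ_J²)), where ρ_J is the Jacobi spectral radius; this is a generic stand-in for the block SOR used in the paper, not the radiative-transfer solver itself.

```python
# SOR on -u'' = 1 over (0,1), u(0) = u(1) = 0, second-order central differences.
# Exact solution u(x) = x(1-x)/2, which the discrete scheme reproduces exactly.
import math

def sor_poisson(n=19, tol=1e-10, max_iter=10000):
    h = 1.0 / (n + 1)
    rho_j = math.cos(math.pi * h)                 # Jacobi spectral radius
    omega = 2.0 / (1.0 + math.sqrt(1.0 - rho_j ** 2))  # optimal relaxation
    u = [0.0] * (n + 2)                           # includes boundary zeros
    f = h * h                                     # h^2 * source term
    for it in range(max_iter):
        diff = 0.0
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + f)  # Gauss-Seidel value
            new = (1 - omega) * u[i] + omega * gs
            diff = max(diff, abs(new - u[i]))
            u[i] = new
        if diff < tol:
            return u, it + 1
    return u, max_iter

u, iters = sor_poisson()
# With n = 19, grid point i = 10 sits at x = 0.5, where u = 0.125.
```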
Nonlinear static and dynamic analysis of beam structures using fully intrinsic equations
NASA Astrophysics Data System (ADS)
Sotoudeh, Zahra
2011-07-01
Beams are structural members with one dimension much larger than the other two. Examples of beams include propeller blades, helicopter rotor blades, and high-aspect-ratio aircraft wings in aerospace engineering; shafts and wind turbine blades in mechanical engineering; towers, highways, and bridges in civil engineering; and DNA modeling in biomedical engineering. Beam analysis includes two sets of equations: a generally linear two-dimensional problem over the cross-sectional plane and a nonlinear, global one-dimensional analysis. This research work deals with a relatively new set of equations for one-dimensional beam analysis, namely the so-called fully intrinsic equations. Fully intrinsic equations comprise a set of geometrically exact, nonlinear, first-order partial differential equations that is suitable for analyzing initially curved and twisted anisotropic beams. A fully intrinsic formulation is devoid of displacement and rotation variables, making it especially attractive because of the absence of singularities, infinite-degree nonlinearities, and other undesirable features associated with finite rotation variables. In spite of the advantages of these equations, using them with certain boundary conditions presents significant challenges. This research work takes a broad look at the challenges of modeling various boundary conditions with the fully intrinsic equations, and should clear the path for their wider and easier use in future research. This work also includes the application of the fully intrinsic equations to the structural analysis of joined-wing aircraft, different rotor blade configurations, and limit-cycle oscillation (LCO) analysis of high-altitude long-endurance (HALE) aircraft.
Rigid body formulation in a finite element context with contact interaction
NASA Astrophysics Data System (ADS)
Refachinho de Campos, Paulo R.; Gay Neto, Alfredo
2018-03-01
The present work proposes a formulation to employ rigid bodies together with flexible bodies in the context of a nonlinear finite element solver, with contact interactions. Inertial contributions due to the distribution of mass of a rigid body are fully developed, considering a general pole position associated with a single node, representing a rigid body element. Additionally, a mechanical constraint is proposed to connect a rigid region composed of several nodes, which is useful for linking rigid/flexible bodies in a finite element environment. Rodrigues rotation parameters are used to describe finite rotations, in an updated Lagrangian description. In addition, the contact formulation termed master-surface to master-surface is employed in conjunction with the rigid body element and flexible bodies, aiming to consider their interaction in a rigid-flexible multibody environment. New surface parameterizations are presented to establish contact pairs, permitting pointwise interaction in a frictional scenario. Numerical examples are provided to show the robustness and applicability of the methods.
Primal-mixed formulations for reaction-diffusion systems on deforming domains
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo
2015-10-01
We propose a finite element formulation for a coupled elasticity-reaction-diffusion system written in a fully Lagrangian form and governing the spatio-temporal interaction of species inside an elastic or hyper-elastic body. A primal weak formulation is the baseline model for the reaction-diffusion system written in the deformed domain, and a finite element method with piecewise linear approximations is employed for its spatial discretization. On the other hand, the strain is introduced as a mixed variable in the equations of elastodynamics, which in turn acts as the coupling field needed to update the diffusion tensor of the modified reaction-diffusion system written in the deformed domain. The discrete mechanical problem yields a mixed finite element scheme based on row-wise Raviart-Thomas elements for stresses, Brezzi-Douglas-Marini elements for displacements, and piecewise constant pressure approximations. The application of the present framework to the study of several coupled biological systems on deforming geometries in two and three spatial dimensions is discussed, and some illustrative examples are provided and extensively analyzed.
Low Mach number fluctuating hydrodynamics for electrolytes
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.
2016-11-01
We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.
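The Debye length that sets the smallest scales of interest in the abstract has a closed form for a symmetric 1:1 electrolyte such as NaCl: λ_D = √(ε k_B T / (2 n e²)). The sketch below evaluates it with CODATA constants; the concentration, temperature, and relative permittivity of water are illustrative round numbers, not values from the paper.

```python
# Debye length of a 1:1 electrolyte (e.g., NaCl in water).
import math

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K
E = 1.602176634e-19      # elementary charge, C
NA = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length(molarity, eps_r=78.4, T=298.15):
    """lambda_D = sqrt(eps0*eps_r*kB*T / (2*n*e^2)), n = ions of each sign per m^3."""
    n = molarity * 1000.0 * NA   # mol/L -> ions/m^3
    return math.sqrt(EPS0 * eps_r * KB * T / (2.0 * n * E * E))

lam = debye_length(0.1)   # roughly a nanometer for 0.1 M NaCl at 25 C
```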
NASA Astrophysics Data System (ADS)
Martin, Alexandre; Torrent, Marc; Caracas, Razvan
2015-03-01
A formulation of the response of a system to strain and electric field perturbations in pseudopotential-based density functional perturbation theory (DFPT) has been proposed by D. R. Hamann and co-workers. It uses an elegant formalism based on the expression of the DFT total energy in reduced coordinates, the key quantity being the metric tensor and its first and second derivatives. We propose to extend this formulation to the Projector Augmented-Wave (PAW) approach. In this context, we express the full elastic tensor including the clamped-atom tensor, the atomic-relaxation contributions (internal stresses), and the response to an electric field change (piezoelectric tensor and effective charges). With this we are able to compute the elastic tensor for all materials (metals and insulators) within a fully analytical formulation. The comparison with finite-difference calculations on simple systems shows excellent agreement. This formalism has been implemented in the plane-wave-based DFT code ABINIT. We apply it to the computation of elastic properties and seismic-wave velocities of iron with impurity elements. By analogy with the materials contained in meteorites, the tested impurities are light elements (H, O, C, S, Si).
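The finite-difference side of the validation mentioned above can be sketched on a toy model: an elastic constant is the second strain derivative of the energy, C = d²E/dε², which a central difference recovers from total-energy evaluations alone. The quadratic "crystal" below is a made-up stand-in for a DFT total energy, purely to show the check.

```python
# Finite-difference elastic constant from a toy energy model E(eps) = 0.5*C11*eps^2.
# The coefficient 150 (e.g., GPa) is arbitrary; a DFPT code would instead
# compare its analytic tensor against this kind of numerical derivative.

def toy_energy(eps, c11=150.0):
    """Energy density of a fictitious crystal under uniaxial strain eps."""
    return 0.5 * c11 * eps * eps

def elastic_constant_fd(energy, h=1e-4):
    """Central second difference d^2 E / d eps^2 at zero strain."""
    return (energy(h) - 2.0 * energy(0.0) + energy(-h)) / (h * h)

c_fd = elastic_constant_fd(toy_energy)   # matches the analytic C11 = 150
```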
NASA Technical Reports Server (NTRS)
Weisbin, C. R. (Editor)
2004-01-01
A workshop entitled, "Outstanding Research Issues in Systematic Technology Prioritization for New Space Missions," was convened on April 21-22, 2004 in San Diego, California to review the status of methods for objective resource allocation, to discuss the research barriers remaining, and to formulate recommendations for future development and application. The workshop explored the state-of-the-art in decision analysis in the context of being able to objectively allocate constrained technical resources to enable future space missions and optimize science return. This article summarizes the highlights of the meeting results.
NASA Technical Reports Server (NTRS)
Chen, Guanrong
1991-01-01
An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.
Constrained Burn Optimization for the International Space Station
NASA Technical Reports Server (NTRS)
Brown, Aaron J.; Jones, Brandon A.
2017-01-01
In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; the current approach does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design
NASA Technical Reports Server (NTRS)
Whorton, Mark; Buschek, Harald; Calise, Anthony J.
1996-01-01
Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.
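The core homotopy idea can be shown on a much simpler problem than compensator synthesis: deform an easy equation into the target one and track the solution. Below, H(x, t) = (1 − t)(x − x₀) + t·f(x) is continued from t = 0 to t = 1 with Newton corrections at each step; the cubic target equation is a made-up example, not related to the H-2/H-infinity design problem.

```python
# Homotopy continuation for scalar root finding: start from the trivial
# problem x - x0 = 0 and deform into f(x) = 0, correcting with Newton steps.

def homotopy_solve(f, df, x0=0.0, steps=50, newton_iters=5):
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1.0 - t) * (x - x0) + t * f(x)      # H(x, t)
            dh = (1.0 - t) + t * df(x)               # dH/dx
            x -= h / dh
    return x

# Target: the real root of x^3 - 2x - 5 = 0 (about 2.0946).
root = homotopy_solve(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2)
```

At t = 1 the last Newton iterations run on f itself, so the endpoint of the continuation is a root of the target equation.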
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2012-01-01
In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarm optimization, and genetic algorithms to seek a near-optimal solution among a list of feasible initial populations. The final optimal solution is then found by using the solution of the first phase as the initial condition for a sequential quadratic programming (SQP) algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
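The two-phase pattern can be sketched in miniature: a cheap global phase locates the right basin, then a local gradient method refines it. Here deterministic sampling stands in for the heuristic phase (ant colony / PSO / GA) and plain gradient descent stands in for SQP; the multimodal objective is a made-up test function, not a scheduling model.

```python
# Phase 1: coarse global search.  Phase 2: local refinement from the phase-1 point.

def objective(x):
    # Two local minima near x = -1 and x = +1; the tilt makes x ~ -1.04 global.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def phase1_sample(lo=-2.0, hi=2.0, num=41):
    """Pick the best point on a coarse grid (stand-in for a heuristic search)."""
    pts = [lo + i * (hi - lo) / (num - 1) for i in range(num)]
    return min(pts, key=objective)

def phase2_descent(x, step=0.01, iters=2000):
    """Refine locally (stand-in for the SQP phase)."""
    for _ in range(iters):
        x -= step * grad(x)
    return x

x0 = phase1_sample()          # lands in the global basin, near x = -1
x_star = phase2_descent(x0)   # converges to the global minimizer
```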
Friction damping of two-dimensional motion and its application in vibration control
NASA Technical Reports Server (NTRS)
Menq, C.-H.; Chidamparam, P.; Griffin, J. H.
1991-01-01
This paper presents an approximate method for analyzing the two-dimensional friction contact problem so as to compute the dynamic response of a structure constrained by friction interfaces. The friction force at the joint is formulated based on the Coulomb model. The single-term harmonic balance scheme, together with the receptance approach of decoupling the effect of the friction force on the structure from those of the external forces has been utilized to obtain the steady state response. The computational efficiency and accuracy of the method are demonstrated by comparing the results with long-term time solutions.
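The single-term harmonic balance step can be illustrated in one dimension: for harmonic motion x = X sin(ωt), the Coulomb force F·sign(ẋ) is a square wave whose fundamental Fourier amplitude is 4F/π, which yields the classical equivalent viscous damping c_eq = 4F/(πωX). The sketch below verifies the 4F/π coefficient numerically; it is a textbook 1-D reduction, not the paper's two-dimensional contact formulation.

```python
# Fundamental Fourier amplitude of the Coulomb friction square wave.
# With x = X sin(theta), sign(x') = sign(cos(theta)), so the first cosine
# coefficient of F*sign(cos(theta)) over one cycle should equal 4F/pi.
import math

def fundamental_amplitude(F=1.0, samples=100000):
    total = 0.0
    for i in range(samples):
        th = 2.0 * math.pi * (i + 0.5) / samples       # midpoint rule
        total += F * math.copysign(1.0, math.cos(th)) * math.cos(th)
    return 2.0 * total / samples                       # (1/pi) * integral

a1 = fundamental_amplitude()   # approaches 4/pi = 1.2732...
```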
Matrix Transfer Function Design for Flexible Structures: An Application
NASA Technical Reports Server (NTRS)
Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.
1985-01-01
The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.
Evaluating competing forces constraining glacial grounding-line stability (Invited)
NASA Astrophysics Data System (ADS)
Powell, R. D.
2013-12-01
Stability of grounding lines of marine-terminating glaciers and ice sheets is of concern due to their importance in governing rates of ice mass loss and consequent sea level rise during global warming. Although processes are similar at tidewater and floating grounding zones, their relative magnitudes in terms of their influence on grounding-line stability vary between these two end members. Processes considered important for this discussion are ice dynamics, ice surface melting and crevassing, ocean dynamics, subglacial sediment and water dynamics, and subglacial bed geometries. Models have continued to improve in their representation of these complex interactions, but reliable field measurements and data continue to be hard-earned and too few to properly constrain the range of boundary conditions in this complicated system. Some data will be presented covering a range of regimes from Alaska, Svalbard, and Antarctica. Certainly more data are required on subglacial sediment/water dynamics and fluxes to fully represent the spectrum of glacial regimes and to assess the significance of grounding-zone sediment systems in counteracting the other processes to force grounding-line stability. Especially important here is constraining the duration of the stability that could be maintained by sediment flux; present data appear to show that it is likely to be a limited period.
Constraining neutron-star tidal Love numbers with gravitational-wave detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flanagan, Eanna E.; Hinderer, Tanja
Ground-based gravitational wave detectors may be able to constrain the nuclear equation of state using the early, low-frequency portion of the signal of detected neutron star-neutron star inspirals. In this early adiabatic regime, the influence of a neutron star's internal structure on the phase of the waveform depends only on a single parameter λ of the star related to its tidal Love number, namely, the ratio of the induced quadrupole moment to the perturbing tidal gravitational field. We analyze the information obtainable from gravitational wave frequencies smaller than a cutoff frequency of 400 Hz, where corrections to the internal-structure signal are less than 10%. For an inspiral of two nonspinning 1.4 M_⊙ neutron stars at a distance of 50 megaparsecs, LIGO II detectors will be able to constrain λ to λ ≤ 2.0×10³⁷ g cm² s² with 90% confidence. Fully relativistic stellar models show that the corresponding constraint on radius R for 1.4 M_⊙ neutron stars would be R ≤ 13.6 km (15.3 km) for an n = 0.5 (n = 1.0) polytrope with equation of state p ∝ ρ^(1+1/n).
Consideration of plant behaviour in optimal servo-compensator design
NASA Astrophysics Data System (ADS)
Moase, W. H.; Manzie, C.
2016-07-01
Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Zhao, Changhong; Zamzam, Ahmed S.
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers (ADMM) enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
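The ADMM coordination pattern described above can be reduced to its simplest form: two operators each minimize their own objective while a consensus constraint ties their decisions together. The scalar quadratics below are made-up stand-ins for the power and water objectives (the true OWPF problem is a nonconvex QCQP); they admit closed-form updates, which keeps the sketch short.

```python
# Scaled-form ADMM for:  min f1(x) + f2(z)  s.t.  x = z,
# with stand-in objectives f1(x) = (x-2)^2 ("power") and f2(z) = (z-6)^2 ("water").
# The consensus optimum minimizes (x-2)^2 + (x-6)^2, i.e., x = z = 4.

def admm(rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x-2)^2 + (rho/2)(x - z + u)^2
        x = (4.0 + rho * (z - u)) / (2.0 + rho)
        # z-update: argmin_z (z-6)^2 + (rho/2)(x - z + u)^2
        z = (12.0 + rho * (x + u)) / (2.0 + rho)
        u += x - z                      # scaled dual update
    return x, z

x, z = admm()
```

Each update uses only that operator's own objective plus the shared consensus term, which is what lets the two networks optimize separately.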
An introduction to optimal power flow: Theory, formulation, and examples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Stephen; Rebennack, Steffen
The set of optimization problems in electric power systems engineering known collectively as Optimal Power Flow (OPF) is one of the most practically important and well-researched subfields of constrained nonlinear optimization. OPF has enjoyed a rich history of research, innovation, and publication since its debut five decades ago. Nevertheless, entry into OPF research is a daunting task for the uninitiated--both due to the sheer volume of literature and because OPF's ubiquity within the electric power systems community has led authors to assume a great deal of prior knowledge that readers unfamiliar with electric power systems may not possess. This article provides an introduction to OPF from an operations research perspective; it describes a complete and concise basis of knowledge for beginning OPF research. The discussion is tailored for the operations researcher who has experience with nonlinear optimization but little knowledge of electrical engineering. Topics covered include power systems modeling, the power flow equations, typical OPF formulations, and common OPF extensions.
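The power flow equations at the core of every OPF formulation have a compact polar form: P_i = Σ_j |V_i||V_j|(G_ij cos θ_ij + B_ij sin θ_ij) and the analogous expression for Q_i. The sketch below evaluates them on a hypothetical two-bus network; the line admittance and operating point are made-up round numbers, not from the article.

```python
# Polar-form AC power flow injections for an n-bus network.
import math

def injections(V, theta, G, B):
    """Active/reactive injections (P_i, Q_i) at each bus."""
    n = len(V)
    P = [0.0] * n
    Q = [0.0] * n
    for i in range(n):
        for j in range(n):
            dt = theta[i] - theta[j]
            P[i] += V[i] * V[j] * (G[i][j] * math.cos(dt) + B[i][j] * math.sin(dt))
            Q[i] += V[i] * V[j] * (G[i][j] * math.sin(dt) - B[i][j] * math.cos(dt))
    return P, Q

# Single line with series admittance y = 1 - 5j p.u. between buses 1 and 2:
g, b = 1.0, -5.0
G = [[g, -g], [-g, g]]
B = [[b, -b], [-b, b]]
P, Q = injections([1.0, 0.98], [0.0, -0.05], G, B)
# Bus 1 (higher voltage, leading angle) exports power; with no shunt elements,
# the total injections equal the line's I^2*R and I^2*X consumption, both >= 0.
```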
The effect of parking orbit constraints on the optimization of ballistic planetary trajectories
NASA Technical Reports Server (NTRS)
Sauer, C. G., Jr.
1984-01-01
An optimization of ballistic planetary trajectories is developed that includes constraints on departure parking orbit inclination and node. This problem is formulated to result in a minimum total Delta V, where the entire constrained injection Delta V is included in the optimization. An additional Delta V is also defined to allow for possible optimization of parking orbit inclination when the launch vehicle orbit capability varies as a function of parking orbit inclination. The optimization problem is formulated using primer vector theory to derive partial derivatives of total Delta V with respect to possible free parameters. Minimization of total Delta V is accomplished using a quasi-Newton gradient search routine. The analysis is applied to an Eros rendezvous mission whose transfer trajectories are characterized by high values of launch asymptote declination during particular launch opportunities. Comparisons in performance are made between trajectories where parking orbit constraints are included in the optimization and trajectories where the constraints are not included.
A simple orbit-attitude coupled modelling method for large solar power satellites
NASA Astrophysics Data System (ADS)
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on natural coordinate formulation. The generalized coordinates are composed of Cartesian coordinates of two points and Cartesian components of two unitary vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to develop natural coordinate formulation to take gravitational force and gravity gradient torque of a rigid body into account, Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.
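The differential-algebraic structure described above (equations of motion plus algebraic constraints) can be illustrated with the simplest constrained mechanical system: a planar pendulum integrated in Cartesian coordinates with the constraint x² + y² = L². The sketch below enforces the constraint by projecting position and velocity after each step; this simple projection is a generic stand-in for the authors' energy- and constraint-conserving algorithm, not a reproduction of it.

```python
# Pendulum in Cartesian coordinates (x, y) under gravity, with the length
# constraint enforced by projection onto the circle and its tangent space.
import math

def step(q, v, dt, L=1.0, g=9.81):
    # Unconstrained explicit update under gravity...
    vx, vy = v[0], v[1] - g * dt
    x, y = q[0] + vx * dt, q[1] + vy * dt
    # ...then project the position back onto the constraint manifold...
    r = math.hypot(x, y)
    x, y = x * L / r, y * L / r
    # ...and remove the velocity component normal to the constraint.
    nx, ny = x / L, y / L
    vn = vx * nx + vy * ny
    return (x, y), (vx - vn * nx, vy - vn * ny)

q, v = (1.0, 0.0), (0.0, 0.0)   # released horizontally from rest
for _ in range(2000):
    q, v = step(q, v, 1e-3)
# The constraint |q| = L holds to rounding error at every step, and the
# velocity remains tangent to the circle.
```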
Rate-independent dissipation in phase-field modelling of displacive transformations
NASA Astrophysics Data System (ADS)
Tůma, K.; Stupkiewicz, S.; Petryk, H.
2018-05-01
In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative of shape memory alloys.
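The augmented Lagrangian idea invoked above, converting a constrained problem into a sequence of smooth unconstrained subproblems with a multiplier update, can be sketched on a generic toy equality-constrained quadratic. The objective, constraint, and penalty parameter here are illustrative only, not the paper's rate-potential.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2   # smooth toy objective
g = lambda x: x[0] + x[1] - 1.0       # equality constraint g(x) = 0

lam, rho = 0.0, 10.0                  # multiplier estimate and penalty weight
x = np.zeros(2)
for _ in range(20):
    # Unconstrained subproblem: objective + multiplier term + quadratic penalty
    aug = lambda z: f(z) + lam * g(z) + 0.5 * rho * g(z) ** 2
    x = minimize(aug, x).x
    lam += rho * g(x)                 # first-order multiplier update

print(x)  # converges to the constrained minimizer [0.5, 0.5]
```

Each outer iteration is a smooth minimization, which is exactly the structural benefit the abstract claims for the non-smooth dissipation potential.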
Matrix methods applied to engineering rigid body mechanics
NASA Astrophysics Data System (ADS)
Crouch, T.
The purpose of this book is to present the solution of a range of rigid-body mechanics problems using a matrix formulation of vector algebra. Essential theory concerning kinematics and dynamics is formulated in terms of matrix algebra. The solution of kinematics and dynamics problems is discussed, taking into account the velocity and acceleration of a point moving in a circular path, the velocity and acceleration determination for a linkage, the angular velocity and angular acceleration of a roller in a taper-roller thrust race, Euler's theorem on the motion of rigid bodies, an automotive differential, a rotating epicyclic, the motion of a high-speed rotor mounted in gimbals, and the vibration of a spinning projectile. Attention is given to the activity of a force, the work done by a conservative force, the work and potential in a conservative system, the equilibrium of a mechanism, bearing forces due to rotor misalignment, and the frequency of vibrations of a constrained rod.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu; Phanish, Deepa
We present an Augmented Lagrangian formulation and its real-space implementation for non-periodic Orbital-Free Density Functional Theory (OF-DFT) calculations. In particular, we rewrite the constrained minimization problem of OF-DFT as a sequence of minimization problems without any constraint, thereby making it amenable to powerful unconstrained optimization algorithms. Further, we develop a parallel implementation of this approach for the Thomas–Fermi–von Weizsacker (TFW) kinetic energy functional in the framework of higher-order finite-differences and the conjugate gradient method. With this implementation, we establish that the Augmented Lagrangian approach is highly competitive compared to the penalty and Lagrange multiplier methods. Additionally, we show that higher-order finite-differences represent a computationally efficient discretization for performing OF-DFT simulations. Overall, we demonstrate that the proposed formulation and implementation are both efficient and robust by studying selected examples, including systems consisting of thousands of atoms. We validate the accuracy of the computed energies and forces by comparing them with those obtained by existing plane-wave methods.
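The abstract's claim that higher-order finite differences are an efficient discretization can be seen in a small convergence check on a known function; the stencils below are the standard second- and fourth-order central-difference approximations of a second derivative, not code from the paper.

```python
import numpy as np

def d2_fd(f_vals, h, order):
    """Second derivative on a uniform grid by central finite differences.
    order 2: 3-point stencil; order 4: 5-point stencil (interior points only)."""
    if order == 2:
        return (f_vals[:-2] - 2 * f_vals[1:-1] + f_vals[2:]) / h ** 2
    if order == 4:
        return (-f_vals[:-4] + 16 * f_vals[1:-3] - 30 * f_vals[2:-2]
                + 16 * f_vals[3:-1] - f_vals[4:]) / (12 * h ** 2)
    raise ValueError(order)

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
f = np.sin(x)  # exact second derivative is -sin(x)
err2 = np.max(np.abs(d2_fd(f, h, 2) + np.sin(x[1:-1])))
err4 = np.max(np.abs(d2_fd(f, h, 4) + np.sin(x[2:-2])))
print(err2, err4)  # the 4th-order stencil is far more accurate at the same h
```

The higher-order stencil buys several digits of accuracy per grid point, which is why coarser grids (and hence fewer unknowns) suffice.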
Gaussian process regression for sensor networks under localization uncertainty
Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming
2013-01-01
In this paper, we formulate Gaussian process regression with observations under localization uncertainty, as arises in resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly account for localization uncertainty. Finally, we apply the proposed approaches to experimentally collected data from a dye concentration field over a section of a river and from a temperature field of an outdoor swimming pool to provide proof-of-concept tests and to evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
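A minimal sketch of the Monte Carlo approximation described above: the GP predictive mean is averaged over sampled true sensor locations. The RBF kernel, noise levels, and data are toy assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel between 1-D location sets a and b
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def gp_mean(x_obs, y_obs, x_star, noise=0.1):
    # Standard GP predictive mean given exact observation locations
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    return rbf(x_star, x_obs) @ np.linalg.solve(K, y_obs)

# Reported (uncertain) sensor locations and their measurements
x_reported = np.array([0.1, 0.5, 0.9])
y = np.sin(2 * np.pi * x_reported)
x_star = np.linspace(0, 1, 5)

# Monte Carlo: average the predictive mean over sampled true locations,
# assuming Gaussian localization error with standard deviation loc_sigma
loc_sigma = 0.02
samples = [gp_mean(x_reported + loc_sigma * rng.standard_normal(3), y, x_star)
           for _ in range(200)]
mc_mean = np.mean(samples, axis=0)
print(mc_mean)
```

The "quick-and-dirty" alternative the abstract criticizes would call gp_mean once with x_reported, ignoring the localization error entirely.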
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
A summary analysis of the 3rd inquiry.
1977-01-01
Twenty ESCAP member countries responded to the "Third Population Inquiry among Governments: Population policies in the context of development in 1976." The questionnaire sent to the member countries covered economic and social development and population growth, mortality, fertility and family formation, population distribution and internal migration, international migration, population data collection and research, training, and institutional arrangements for the formulation of population policies within development. Most of the responding governments in the ESCAP region indicated that the present rate of population growth constrains their social and economic development. Among those governments, 13 considered that the most appropriate response to the constraint would include an adjustment of both socioeconomic and demographic factors. Eleven of the governments regarded their present levels of average life expectancy at birth as "acceptable," and 7 identified their levels as "unacceptable." Most of the responding governments consider that, in general, their present level of fertility is too high and constrains family well-being. Internal migration and population distribution are coming to be seen as concerns for government population policy. The most popular approaches to distributing economic and social activities are rural development, urban and regional development, and industrial dispersion. There was much less concern among the governments returning the questionnaire about the effect of international migration than of internal migration on social and economic development.
Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*
Karaivanov, Alexander; Townsend, Robert M.
2014-01-01
We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and extensive aeroelastic simulations are then used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In the traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. A constrained optimization setup is then suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate controller performance and to formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2, and the model is then reduced by model-order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
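The constrained tuning setup can be sketched as follows, assuming a toy first-order plant rather than the reduced turbine model: minimize the IAE for a step disturbance over the PI gains, subject to a bound on the sensitivity peak Ms. All plant parameters, gains, and limits are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

w = np.logspace(-2, 2, 400)      # frequency grid, rad/s
P = 1.0 / (1j * w + 1.0)         # toy first-order plant (not the turbine model)

def Ms(gains):
    """Peak of the sensitivity function |1 / (1 + P*C)| for a PI controller."""
    kp, ki = gains
    C = kp + ki / (1j * w)
    return np.max(np.abs(1.0 / (1.0 + P * C)))

def iae(gains):
    """IAE of the output for a unit step disturbance, via crude Euler simulation."""
    kp, ki = gains
    dt, x, integ, cost = 0.01, 0.0, 0.0, 0.0
    for _ in range(2000):
        u = -kp * x - ki * integ       # PI feedback, setpoint 0
        x += dt * (-x + u + 1.0)       # plant state with step disturbance
        integ += dt * x
        cost += dt * abs(x)
    return cost

# Constrained tuning: minimize IAE subject to Ms <= 1.6 (a common robustness cap)
res = minimize(iae, x0=[1.0, 1.0],
               bounds=[(0.01, 20.0), (0.01, 20.0)],
               constraints=[{"type": "ineq", "fun": lambda g: 1.6 - Ms(g)}])
print(res.x, Ms(res.x))
```

The paper's setup replaces the toy plant with the HAWCStab2 linearization and adds further robustness measures, but the structure of the optimization is the same.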
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed
2016-10-01
The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for the constraint potentials, density and temperature, which allows mixing to be accounted for alongside chemical reaction without operator splitting. RCCE is a dimension-reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by the RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. RCCE is applied to a spatially homogeneous, constant-pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprising 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed the detailed kinetics model (DKM). RCCE shows accurate prediction of combustion in the PaSR at different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
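The constrained entropy maximization at the heart of RCCE, recovering a full composition from a few constraints, can be sketched on a toy four-species mixture with two linear constraints. The constraint weights and target value are hypothetical, and mixture entropy is reduced to its ideal mixing part.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical constraint weights (e.g. a per-species element or moiety count)
a = np.array([0.0, 1.0, 2.0, 3.0])

def neg_entropy(x):
    # Negative ideal mixing entropy, -S = sum x ln x (to minimize)
    return np.sum(x * np.log(x))

# Constraints: mole fractions sum to 1, and one fixed "RCCE constraint" value
cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0},
        {"type": "eq", "fun": lambda x: a @ x - 1.2}]
res = minimize(neg_entropy, x0=np.full(4, 0.25), constraints=cons,
               bounds=[(1e-9, 1.0)] * 4)
x_eq = res.x
print(x_eq)  # constrained-equilibrium composition
```

The solution has the familiar exponential form x_i proportional to exp(-lambda a_i), with the Lagrange multiplier lambda playing the role of an RCCE constraint potential.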
Shieu, Wendy; Stauch, Oliver B; Maa, Yuh-Fun
2015-01-01
Syringe filling of high-concentration/viscosity monoclonal antibody formulations is a complex process that is not fully understood. This study, which builds on a previous investigation that used a bench-top syringe filling unit to examine formulation drying at the filling nozzle tip and subsequent nozzle clogging, further explores the impact of formulation-nozzle material interactions on formulation drying and nozzle clogging. Syringe-filling nozzles made of glass, stainless steel, or plastic (polypropylene, silicone, and Teflon®), which represent a full range of materials with hydrophilic and hydrophobic properties as quantified by contact angle measurements, were used to fill liquids of different viscosities, including a high-concentration monoclonal antibody formulation. Compared with hydrophilic nozzles, hydrophobic nozzles offered two unique features that discouraged formulation drying and nozzle clogging: (1) the liquid formulation is more likely to be withdrawn into the hydrophobic nozzle under the same suck-back conditions, and (2) the residual liquid film left on the nozzle wall when using high suck-back settings settles to form a liquid plug away from the hydrophobic nozzle tip. Making the tip of the nozzle hydrophobic (silicone coating on glass and Teflon coating on stainless steel) could achieve the same suck-back performance as plastic nozzles. This study demonstrated that hydrophobic nozzles are most effective in reducing the risk of nozzle clogging caused by drying of high-concentration monoclonal antibody formulations during extended nozzle idle time in a large-scale filling facility and environment. Syringe filling is a well-established manufacturing process and has been implemented by numerous contract manufacturing organizations and biopharmaceutical companies. However, its technical details and associated critical process parameters are rarely published. Information on high-concentration/viscosity formulation filling is particularly lacking.
This study is the continuation of a previous investigation, with a focus on understanding the impact of nozzle material on the suck-back function of liquid formulations. The findings identified the most critical parameter, nozzle material hydrophobicity, in alleviating formulation drying at the nozzle tip and ultimately limiting the occurrence of nozzle clogging during the filling process. The outcomes of this study will benefit scientists and engineers who develop pre-filled syringe products by providing a better understanding of high-concentration formulation filling principles and challenges. © PDA, Inc. 2015.
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
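The fully stressed resizing rule that FUD and MFUD build on can be sketched as follows: each member's area is scaled by the ratio of its stress to the allowable stress. The member forces and allowable stress are hypothetical, and the toy truss is statically determinate, so the iteration converges in one pass; indeterminate structures would need a reanalysis of forces each cycle.

```python
import numpy as np

forces = np.array([12000.0, -8000.0, 5000.0])  # hypothetical member forces, N
sigma_allow = 150.0e6                          # allowable stress, Pa
areas = np.full(3, 1.0e-4)                     # initial member areas, m^2

for _ in range(5):
    stress = forces / areas                    # member stresses at current sizing
    areas = areas * np.abs(stress) / sigma_allow  # fully stressed resizing rule

print(areas)  # each member now carries exactly the allowable stress
```

MFUD augments this stress-driven loop so that displacement limits are also satisfied without the overdesign the plain FUD tends to produce.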
Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded
NASA Technical Reports Server (NTRS)
Culley, Dennis
2010-01-01
Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine, a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control-law processor from the hostile engine environment, using a digital communications network and engine-mounted high-temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.
Nuchuchua, O; Every, H A; Hofland, G W; Jiskoot, W
2014-11-01
In this study, we evaluated the influence of supercritical carbon dioxide (scCO2) spray drying conditions, in the absence of organic solvent, on the ability to produce dry protein/trehalose formulations at 1:10 and 1:4 (w/w) ratios. When using a 4 L drying vessel, we found that decreasing the solution flow rate and solution volume, or increasing the scCO2 flow rate, resulted in a significant reduction in the residual water content of the dried products (Karl Fischer titration). The best conditions were then used to evaluate the ability to scale the scCO2 spray drying process from a 4 L to a 10 L drying chamber. The ratio of scCO2 to solution flow rate was kept constant. The products at both scales exhibited similar residual moisture contents, particle morphologies (SEM), and glass transition temperatures (DSC). After reconstitution, the lysozyme activity (enzymatic assay) and structure (circular dichroism, HP-SEC) were fully preserved, but the sub-visible particle content was slightly increased (flow imaging microscopy, nanoparticle tracking analysis). Furthermore, the drying conditions were applicable to other proteins, resulting in products of similar quality to the lysozyme formulations. In conclusion, we established scCO2 spray drying processing conditions for protein formulations without an organic solvent that hold promise for the industrial production of dry protein formulations. Copyright © 2014 Elsevier B.V. All rights reserved.
PD-PK evaluation of freeze-dried atorvastatin calcium-loaded poly-ε-caprolactone nanoparticles.
Ahmed, Iman S; El-Hosary, Rania; Shalaby, Samia; Abd-Rabo, Marwa M; Elkhateeb, Dalia G; Nour, Samia
2016-05-17
In this work, lyophilized poly-ε-caprolactone nanoparticles (NPs) loaded with atorvastatin calcium (AC) were developed in an attempt to improve the in-vivo performance of AC following oral administration. The individual and combined effects of several formulation variables were previously investigated using step-wise full factorial designs in order to produce optimized AC-NPs with predetermined characteristics, including particle size, drug loading capacity, drug release profile and physical stability. Four optimized formulations were further subjected in this work to lyophilization to promote their long-term physical stability and were fully characterized. The pharmacodynamic (PD) and pharmacokinetic (PK) properties of two optimized freeze-dried AC-NPs formulations showing acceptable long-term stability were determined and compared to a marketed AC immediate release tablet (Lipitor(®)) in albino rats. PD results revealed that the two tested formulations were equally effective in reducing low density lipoprotein (LDL) and triglyceride (TG) levels when given in reduced doses compared to Lipitor(®) and showed no adverse effects. PK results, on the other hand, revealed that the two freeze-dried AC-NPs formulations had significantly lower bioavailability than Lipitor(®). Taken together, the PD and PK results demonstrate that the improved efficacy obtained at reduced doses from the freeze-dried AC-NPs could be due to increased concentration of AC in the liver rather than in the plasma. Copyright © 2016 Elsevier B.V. All rights reserved.
Stress formulation in the all-electron full-potential linearized augmented plane wave method
NASA Astrophysics Data System (ADS)
Nagasako, Naoyuki; Oguchi, Tamio
2012-02-01
A stress formulation in the linearized augmented plane wave (LAPW) method was proposed in 2002 [1] as an extension of the force formulation in the LAPW method [2]. However, only pressure calculations for Al and Si were reported in Ref. [1], and even now stress calculations have not been fully established in the LAPW method. In order to make it possible to efficiently relax the lattice shape and atomic positions simultaneously and to precisely evaluate elastic constants in the LAPW method, we reformulate the stress formula in the LAPW method with the Soler-Williams representation [3]. The validity of the formulation is tested by comparing the pressure obtained as the trace of the stress tensor with that estimated from total energies for a wide variety of material systems. Results show that the pressure is estimated to an accuracy of better than 0.1 GPa. Calculations of the shear elastic constant show that the shear components of the stress tensor are also precisely computed with the present formulation [4].
[1] T. Thonhauser et al., Solid State Commun. 124, 275 (2002).
[2] R. Yu et al., Phys. Rev. B 43, 6411 (1991).
[3] J. M. Soler and A. R. Williams, Phys. Rev. B 40, 1560 (1989).
[4] N. Nagasako and T. Oguchi, J. Phys. Soc. Jpn. 80, 024701 (2011).
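The pressure check mentioned above, taking pressure as minus one third of the trace of the stress tensor, amounts to the following one-liner; the stress values are hypothetical.

```python
import numpy as np

# Hypothetical diagonal stress tensor under compression, in Pa
stress = np.diag([-2.1e9, -2.0e9, -1.9e9])

# Hydrostatic pressure from the trace: p = -tr(sigma) / 3
pressure = -np.trace(stress) / 3.0
print(pressure)  # 2.0e9 Pa, i.e. 2.0 GPa
```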
Kimura, Go; Puchkov, Maxim; Leuenberger, Hans
2013-07-01
Based on a Quality by Design (QbD) approach, it is important to follow the International Conference on Harmonization (ICH) guidance Q8 (R2) recommendations to explore the design space. The application of an experimental design alone is, however, not sufficient, because it is necessary to take into account the effects of percolation theory. For this purpose, adequate software needs to be applied, capable of detecting percolation thresholds as a function of the distribution of the functional powder particles. Formulation computer-aided design (F-CAD), originally designed to calculate in silico the drug dissolution profiles of a tablet formulation, is, for example, a suitable software package for this purpose. The study shows that F-CAD can calculate a good estimate of the disintegration time of a tablet formulation of mefenamic acid. More importantly, F-CAD is capable of replacing expensive laboratory work by performing in silico experiments for the exploration of the formulation design space according to ICH guidance Q8 (R2). As a consequence, a workflow similar to best practice in the automotive and aircraft industries can be adopted by the pharmaceutical industry: the drug delivery vehicle can first be fully designed and tested in silico, which will improve the quality of the marketed formulation and save time and money. Copyright © 2013 Wiley Periodicals, Inc.
Strehlenert, H; Richter-Sundberg, L; Nyström, M E; Hasson, H
2015-12-08
Evidence has come to play a central role in health policymaking. However, policymakers tend to use other types of information besides research evidence. Most prior studies on evidence-informed policy have focused on the policy formulation phase without a systematic analysis of its implementation. It has been suggested that in order to fully understand the policy process, the analysis should include both policy formulation and implementation. The purpose of the study was to explore and compare two policies aiming to improve health and social care in Sweden and to empirically test a new conceptual model for evidence-informed policy formulation and implementation. Two concurrent national policies were studied during the entire policy process using a longitudinal, comparative case study approach. Data was collected through interviews, observations, and documents. A Conceptual Model for Evidence-Informed Policy Formulation and Implementation was developed based on prior frameworks for evidence-informed policymaking and policy dissemination and implementation. The conceptual model was used to organize and analyze the data. The policies differed regarding the use of evidence in the policy formulation and the extent to which the policy formulation and implementation phases overlapped. Similarities between the cases were an emphasis on capacity assessment, modified activities based on the assessment, and a highly active implementation approach relying on networks of stakeholders. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was empirically useful to organize the data. The policy actors' roles and functions were found to have a great influence on the choices of strategies and collaborators in all policy phases. The Conceptual Model for Evidence-Informed Policy Formulation and Implementation was found to be useful. 
However, it provided insufficient guidance for analyzing actors involved in the policy process, capacity-building strategies, and overlapping policy phases. A revised version of the model that includes these aspects is suggested.
Use of multi-node wells in the Groundwater-Management Process of MODFLOW-2005 (GWM-2005)
Ahlfeld, David P.; Barlow, Paul M.
2013-01-01
Many groundwater wells are open to multiple aquifers or to multiple intervals within a single aquifer. These types of wells can be represented in numerical simulations of groundwater flow by use of the Multi-Node Well (MNW) Packages developed for the U.S. Geological Survey’s MODFLOW model. However, previous versions of the Groundwater-Management (GWM) Process for MODFLOW did not allow the use of multi-node wells in groundwater-management formulations. This report describes modifications to the MODFLOW–2005 version of the GWM Process (GWM–2005) to provide for such use with the MNW2 Package. Multi-node wells can be incorporated into a management formulation as flow-rate decision variables for which optimal withdrawal or injection rates will be determined as part of the GWM–2005 solution process. In addition, the heads within multi-node wells can be used as head-type state variables, and, in that capacity, be included in the objective function or constraint set of a management formulation. Simple head bounds also can be defined to constrain water levels at multi-node wells. The report provides instructions for including multi-node wells in the GWM–2005 data-input files and a sample problem that demonstrates use of multi-node wells in a typical groundwater-management problem.
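A management formulation with flow-rate decision variables and head-type constraints, as described above, reduces in its simplest linear form to a linear program. The response coefficients and limits below are hypothetical, not MODFLOW or GWM-2005 output.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response matrix: drawdown (m) at two control points per unit
# pumping (m^3/d) at two multi-node wells, assumed known from a flow model.
response = np.array([[0.002, 0.001],
                     [0.001, 0.003]])
max_drawdown = np.array([5.0, 6.0])  # allowed head decline at each point, m

# Maximize total withdrawal q1 + q2 (linprog minimizes, hence the negation),
# subject to the head constraints and per-well capacity bounds.
res = linprog(c=[-1.0, -1.0], A_ub=response, b_ub=max_drawdown,
              bounds=[(0.0, 3000.0), (0.0, 3000.0)])
q_opt = res.x
print(q_opt, -res.fun)  # optimal pumping rates and total withdrawal
```

GWM-2005 solves a problem of this shape (with many more wells, stress periods, and constraint types) as part of its response-matrix solution process.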
DYNA3D: A computer code for crashworthiness engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallquist, J.O.; Benson, D.J.
1986-09-01
A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; and (4) a rigid body capability to represent the bodies away from the impact region at a greatly reduced cost without sacrificing accuracy in the momentum calculations. Basic methodologies of the program are briefly presented along with several crashworthiness calculations. Efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.
Uncertainties in building a strategic defense.
Zraket, C A
1987-03-27
Building a strategic defense against nuclear ballistic missiles involves complex and uncertain functional, spatial, and temporal relations. Such a defensive system would evolve and grow over decades. It is too complex, dynamic, and interactive to be fully understood initially by design, analysis, and experiments. Uncertainties exist in the formulation of requirements and in the research and design of a defense architecture that can be implemented incrementally and be fully tested to operate reliably. The analysis and measurement of system survivability, performance, and cost-effectiveness are critical to this process. Similar complexities exist for an adversary's system that would suppress or use countermeasures against a missile defense. Problems and opportunities posed by these relations are described, with emphasis on the unique characteristics and vulnerabilities of space-based systems.
Medvigy, David; Moorcroft, Paul R
2012-01-19
Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances the data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with the first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
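The trade-off PICTGV balances, a data-fidelity term against a prior-image-constrained gradient penalty, can be illustrated with a deliberately simplified 1-D analogue (first-order differences and a smoothed absolute value stand in for the authors' higher-order TGV; all names and values are hypothetical):

```python
# Simplified 1-D analogue of prior-image-constrained reconstruction
# (NOT the authors' PICTGV: first-order differences replace TGV and
# the data are hypothetical). We minimize
#   ||x - y||^2 + lam * sum_i phi(D(x - x_prior)_i),
# where phi is a smoothed absolute value and x_prior plays the role
# of the high-quality broad-spectrum image.

def reconstruct(y, x_prior, lam=1.0, eps=0.05, step=0.01, iters=5000):
    n = len(y)
    x = list(y)
    for _ in range(iters):
        g = [2.0 * (x[i] - y[i]) for i in range(n)]   # data-fidelity gradient
        for i in range(n - 1):
            d = (x[i + 1] - x_prior[i + 1]) - (x[i] - x_prior[i])
            w = d / (abs(d) + eps)                    # smoothed |.| derivative
            g[i] -= lam * w
            g[i + 1] += lam * w
        x = [x[i] - step * g[i] for i in range(n)]
    return x

# Noisy step signal; the prior image shares the edge location.
y = [0.1, -0.05, 0.08, 1.1, 0.95, 1.05]
x_prior = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
x = reconstruct(y, x_prior)
```

With the prior supplying the edge location, the recovered signal keeps the prior's structure while fitting the data; setting `x_prior` to zeros reduces this to plain smoothed-TV denoising.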
Equivalent theories redefine Hamiltonian observables to exhibit change in general relativity
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2017-03-01
Change and local spatial variation are missing in canonical General Relativity’s observables as usually defined, an aspect of the problem of time. Definitions can be tested using equivalent formulations of a theory, non-gauge and gauge, because they must have equivalent observables and everything is observable in the non-gauge formulation. Taking an observable from the non-gauge formulation and finding the equivalent in the gauge formulation, one requires that the equivalent be an observable, thus constraining definitions. For massive photons, the de Broglie-Proca non-gauge formulation observable A_μ is equivalent to the Stueckelberg-Utiyama gauge formulation quantity A_μ + ∂_μφ, which must therefore be an observable. To achieve that result, observables must have 0 Poisson bracket not with each first-class constraint, but with the Rosenfeld-Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints, in accord with the Pons-Salisbury-Sundermeyer definition of observables. The definition for external gauge symmetries can be tested using massive gravity, where one can install gauge freedom by parametrization with clock fields X^A. The non-gauge observable g^{μν} has the gauge equivalent X^A_{,μ} g^{μν} X^B_{,ν}. The Poisson bracket of X^A_{,μ} g^{μν} X^B_{,ν} with G turns out to be not 0 but a Lie derivative. This non-zero Poisson bracket refines and systematizes Kuchař’s proposal to relax the 0 Poisson bracket condition with the Hamiltonian constraint. Thus observables need covariance, not invariance, in relation to external gauge symmetries. The Lagrangian and Hamiltonian for massive gravity are those of General Relativity + Λ + 4 scalars, so the same definition of observables applies to General Relativity. Local fields such as g_{μν} are observables. Thus observables change. Requiring equivalent observables for equivalent theories also recovers Hamiltonian-Lagrangian equivalence.
The signal of mantle anisotropy in the coupling of normal modes
NASA Astrophysics Data System (ADS)
Beghein, Caroline; Resovsky, Joseph; van der Hilst, Robert D.
2008-12-01
We investigate whether the coupling of normal mode (NM) multiplets can help us constrain mantle anisotropy. We first derive explicit expressions for the generalized structure coefficients of coupled modes in terms of elastic coefficients, including the Love parameters describing radial anisotropy and the parameters describing azimuthal anisotropy (Jc, Js, Kc, Ks, Mc, Ms, Bc, Bs, Gc, Gs, Ec, Es, Hc, Hs, Dc and Ds). We detail the selection rules that describe which modes can couple together and which elastic parameters govern their coupling. We then focus on modes of type 0Sl-0Tl+1 and determine whether they can be used to constrain mantle anisotropy. We show that they are sensitive to six elastic parameters describing azimuthal anisotropy, in addition to the two shear-wave elastic parameters L and N (i.e. VSV and VSH). We find that neither isotropic nor radially anisotropic mantle models can fully explain the observed degree-two signal. We show that the NM signal that remains after correction for the effects of the crust and mantle radial anisotropy can be explained by the presence of azimuthal anisotropy in the upper mantle. Although the data favour locating azimuthal anisotropy below 400 km, its depth extent and distribution are still not well constrained by the data. Consideration of NM coupling can thus help constrain azimuthal anisotropy in the mantle, but joint analyses with surface-wave phase velocities are needed to reduce the parameter trade-offs and improve our constraints on the individual elastic parameters and the depth location of the azimuthal anisotropy.
Ultrasonic Mixing of Epoxy Curing Agents.
1983-05-01
Future generation aircraft need higher performance polymer matrices to fully achieve the weight savings possible with composite materials...(ref. 1). New resins are being formulated in an effort to understand basic polymer behavior and to develop improved resins (refs. 2, 3 and 4). Some...polymer/curing agent combinations that could be useful cannot be mixed properly using conventional methods because of the high melting temperature
Geometric convex cone volume analysis
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Chang, Chein-I.
2016-05-01
Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance-unconstrained techniques, Pixel Purity Index (PPI) and Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis is generally preferred; it makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC). For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods, which use simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from calculating volume and further improve the performance of convex cone-based EFAs.
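The simplex-volume criterion mentioned for N-FINDR/SGA can be written down directly (the generic textbook formula, not the paper's hyperplane-projected variant): the volume of a simplex with vertices e_0, ..., e_p is |det([e_1 - e_0, ..., e_p - e_0])| / p!.

```python
# Simplex volume as used by N-FINDR/SGA-style endmember criteria:
# V = |det([e1 - e0, ..., ep - e0])| / p!  (generic formula; the
# data points below are illustrative).

from math import factorial

def det(m):
    # Gaussian elimination with partial pivoting.
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        if abs(m[p][c]) < 1e-12:
            return 0.0
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return d

def simplex_volume(vertices):
    e0, rest = vertices[0], vertices[1:]
    p = len(rest)
    # Column j of the edge matrix is vertex j+1 minus vertex 0.
    cols = [[v[i] - e0[i] for v in rest] for i in range(p)]
    return abs(det(cols)) / factorial(p)

# Unit right triangle in 2-D has area 1/2.
area = simplex_volume([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

N-FINDR-style algorithms grow or swap vertices to maximize exactly this quantity over candidate pixel sets.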
A precision search for WIMPs with charged cosmic rays
NASA Astrophysics Data System (ADS)
Reinert, Annika; Winkler, Martin Wolfgang
2018-01-01
AMS-02 has reached the sensitivity to probe canonical thermal WIMPs by their annihilation into antiprotons. Due to the high precision of the data, uncertainties in the astrophysical background have become the most limiting factor for indirect dark matter detection. In this work we systematically quantify and—where possible—reduce uncertainties in the antiproton background. We constrain the propagation of charged cosmic rays through the combination of antiproton, B/C and positron data. Cross section uncertainties are determined from a wide collection of accelerator data and are—for the first time ever—fully taken into account. This allows us to robustly constrain even subdominant dark matter signals through their spectral properties. For a standard NFW dark matter profile we are able to exclude thermal WIMPs with masses up to 570 GeV which annihilate into bottom quarks. While we confirm a reported excess compatible with dark matter of mass around 80 GeV, its local (global) significance only reaches 2.2 σ (1.1 σ) in our analysis.
Geometric constrained variational calculus I: Piecewise smooth extremals
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2015-05-01
A geometric setup for constrained variational calculus is presented. The analysis deals with the study of the extremals of an action functional defined on piecewise differentiable curves, subject to differentiable, non-holonomic constraints. Special attention is paid to the tensorial aspects of the theory. As far as the kinematical foundations are concerned, a fully covariant scheme is developed through the introduction of the concept of infinitesimal control. The standard classification of the extremals into normal and abnormal ones is discussed, pointing out the existence of an algebraic algorithm assigning to each admissible curve a corresponding abnormality index, related to the co-rank of a suitable linear map. Attention is then shifted to the study of the first variation of the action functional. The analysis includes a revisitation of Pontryagin's equations and of the Lagrange multipliers method, as well as a reformulation of Pontryagin's algorithm in Hamiltonian terms. The analysis is completed by a general result, concerning the existence of finite deformations with fixed endpoints.
Large deformation image classification using generalized locality-constrained linear coding.
Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Magnetic resonance (MR) imaging has been demonstrated to be very useful for the clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of the initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
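LLC itself admits a small closed-form solve, sketched here on toy 2-D data (illustrative codebook and regularization weight; a simple squared-distance locality penalty stands in for the exponential-weighted one commonly used): solve (G + lam·diag(dist²)) c = 1 and normalize so the coefficients sum to one.

```python
# Toy locality-constrained linear coding (LLC) solve: codebook atoms
# far from the input x are penalized, so the code concentrates on
# nearby atoms. Data, codebook, and lam are illustrative.

def solve(a, b):
    # Gaussian elimination with partial pivoting for a small system.
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n + 1):
                a[r][k] -= f * a[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][k] * x[k]
                              for k in range(r + 1, n))) / a[r][r]
    return x

def llc_code(x, atoms, lam=0.1):
    diff = [[x[d] - b[d] for d in range(len(x))] for b in atoms]
    m = len(atoms)
    # Data covariance G_ij = (x - b_i) . (x - b_j).
    G = [[sum(diff[i][d] * diff[j][d] for d in range(len(x)))
          for j in range(m)] for i in range(m)]
    for i in range(m):
        G[i][i] += lam * sum(v * v for v in diff[i])  # locality penalty
    c = solve(G, [1.0] * m)
    s = sum(c)
    return [ci / s for ci in c]          # enforce sum-to-one

atoms = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
c = llc_code((0.9, 0.1), atoms)
```

The code weight lands overwhelmingly on the atom nearest the input, which is the locality behavior that makes LLC codes good classification features.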
Computing an upper bound on contact stress with surrogate duality
NASA Astrophysics Data System (ADS)
Xuan, Zhaocheng; Papadopoulos, Panayiotis
2016-07-01
We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
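Dinkelbach's algorithm, which the second bound relies on, reduces fractional programming to a sequence of parametric problems: solve x* = argmax f(x) - λ·g(x), update λ = f(x*)/g(x*), and stop when the parametric maximum is near zero. A toy 1-D sketch (grid-searched inner maximization, illustrative functions):

```python
# Dinkelbach's algorithm for maximizing f(x)/g(x) with g > 0
# (generic sketch; the 1-D fraction below is illustrative, not the
# contact-force functional from the paper).

def dinkelbach(f, g, xs, tol=1e-9, iters=50):
    lam = 0.0
    for _ in range(iters):
        x = max(xs, key=lambda t: f(t) - lam * g(t))  # inner maximization
        if f(x) - lam * g(x) < tol:                   # fixed point reached
            return x, lam
        lam = f(x) / g(x)                             # ratio update
    return x, lam

f = lambda x: x + 1.0
g = lambda x: x * x + 1.0
xs = [i / 10000.0 for i in range(20001)]              # grid on [0, 2]
x_star, lam_star = dinkelbach(f, g, xs)
```

For this fraction the exact maximizer is x = √2 - 1 with value (1 + √2)/2, which the iteration reaches in a handful of updates.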
Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance
NASA Technical Reports Server (NTRS)
Markopoulos, Nikos; Calise, Anthony J.
1995-01-01
A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to the constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises, under the assumption of tangential thrust, from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.
A distributed algorithm for demand-side management: Selling back to the grid.
Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M
2017-11-01
Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of a comprehensive method to manage the demand side and consumer behavior in order to obtain an optimum solution. The method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to a local change in the solution point. In addition, the model should allow compensation to consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
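The conversion from a constrained convex problem to an unconstrained one via a penalty, the step preceding the diffusion solver, can be sketched in miniature (a generic quadratic-penalty toy, not the paper's segmentation-based penalty or its grid cost model):

```python
# Penalty-method toy: the constrained convex problem
#   min x^2 + y^2  subject to  x + y = 1
# is replaced by the unconstrained problem
#   min x^2 + y^2 + mu * (x + y - 1)^2
# and solved by plain gradient descent. The exact constrained
# solution is (0.5, 0.5); large mu forces near-feasibility.

def solve(mu=1000.0, step=2e-4, iters=30000):
    x = y = 0.0
    for _ in range(iters):
        v = x + y - 1.0                  # constraint violation
        gx = 2.0 * x + 2.0 * mu * v      # gradient of objective + penalty
        gy = 2.0 * y + 2.0 * mu * v
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = solve()
```

In the paper's distributed setting, each consumer would run such gradient steps locally and fuse iterates with neighbors via diffusion, rather than solving centrally as here.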
NASA Astrophysics Data System (ADS)
Filho, Sebastião Mauro
2017-01-01
In this thesis we applied the perturbative method, on a classical level, to fourth-order gravity and to Renormalization Group extended General Relativity (RGGR). We consider an auxiliary-fields formulation for general fourth-order gravity on an arbitrary curved background to analyze the metric perturbations in this theory. The case of a Ricci-flat background was elaborated in detail. We noticed that the use of auxiliary fields helps to make the perturbative analysis easier and the results clearer. As an application we reconsider the stability problem of the Schwarzschild and Kerr black holes in fourth-order gravity. We also used the perturbative method to develop the Newtonian and post-Newtonian limits of RGGR. In the Solar System, RGGR depends on a single dimensionless parameter ν̄, and this parameter is such that for ν̄ = 0 one fully recovers General Relativity in the Solar System. In order to study the Newtonian limit we used the conformal transformation technique and the dynamics of the Laplace-Runge-Lenz (LRL) vector. In this way, we could estimate the upper bound for ν̄ within the Solar System in two cases: one where the external potential effect is considered and another where it is not. Previously this parameter was constrained to ν̄ < 10^-21, without considering the external potential effect. However, as we showed, when such an effect is considered this bound increases by five orders of magnitude, to ν̄ < 10^-16. Moreover, we showed that under a certain approximation RGGR can be easily tested using the parametrized post-Newtonian (PPN) formalism.
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems in which the weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
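The sequential reweighting idea, solving a weighted problem and recomputing the weights from the current solution, looks like this in a 1-D denoising toy (an illustrative IRLS-style analogue with hypothetical data, not the authors' projection-constrained CT algorithm):

```python
# Sequentially reweighted smoothing in 1-D: alternate between
# (a) minimizing ||u - y||^2 + lam * sum_i w_i * (Du)_i^2 and
# (b) recomputing w_i = 1 / (|Du|_i + eps), capped at w_max.
# Small gradients get large weights (smoothed away); large
# gradients (edges) get small weights (preserved).

def reweighted_smooth(y, lam=0.2, eps=0.01, w_max=10.0,
                      outer=5, step=0.05, inner=2000):
    n = len(y)
    u = list(y)
    w = [1.0] * (n - 1)
    for _ in range(outer):
        for _ in range(inner):               # weighted quadratic solve
            g = [2.0 * (u[i] - y[i]) for i in range(n)]
            for i in range(n - 1):
                d = u[i + 1] - u[i]
                g[i] -= 2.0 * lam * w[i] * d
                g[i + 1] += 2.0 * lam * w[i] * d
            u = [u[i] - step * g[i] for i in range(n)]
        w = [min(w_max, 1.0 / (abs(u[i + 1] - u[i]) + eps))
             for i in range(n - 1)]          # reweight from current u
    return u

y = [0.0, 0.1, -0.1, 1.05, 0.9, 1.1]         # noisy step with one edge
u = reweighted_smooth(y)
```

As in the paper, a single parameter in the weighting function (here `eps`, together with the cap `w_max`) controls how aggressively small gradients are sparsified.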
A bioequivalence study of two memantine formulations in healthy Chinese male volunteers.
Deng, Ying; Zhuang, Jialang; Wu, Jingguo; Chen, Jiangying; Ding, Liang; Wang, Xueding; Huang, Lihui; Zeng, Guixiong; Chen, Jie; Ma, Zhongfu; Chen, Xiao; Zhong, Guoping; Huang, Min; Zhao, Xianglan
2017-10-01
The aim of the current study is to evaluate the bioequivalence between the test and reference formulations of memantine in a single-dose, two-period and two-sequence crossover study with a 44-day washout interval. A total of 20 healthy Chinese male volunteers were enrolled and completed the study, after oral administration of single doses of 10 mg test and reference formulations of memantine. The blood samples were collected at different time points and memantine concentrations were determined by a fully validated HPLC-MS/MS method. The evaluated pharmacokinetic parameters (test vs. reference), including Cmax (18 ± 3.2 vs. 17.8 ± 3.4), AUC0-t (1,188.5 ± 222.2 vs. 1,170.9 ± 135.7), and AUC0-∞ (1,353.3 ± 258.6 vs. 1,291.3 ± 136.7), were assessed for bioequivalence based on current guidelines. The observed pharmacokinetic parameters of the memantine test drug were similar to those of the reference formulation. The 90% confidence intervals of the test/reference ratios for Cmax, AUC0-t, and AUC0-∞ were within the bioequivalence acceptance range of 80-125%. The results obtained from the healthy Chinese subjects in this study suggest that the test formulation of memantine 10 mg tablet is bioequivalent to the reference formulation (Ebixa® 10 mg tablet).
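The standard average-bioequivalence calculation behind such a conclusion can be sketched as follows (the differences below are a hypothetical illustration, not the study's data): the 90% confidence interval for the test/reference ratio is formed on the log scale and back-transformed.

```python
# Average-bioequivalence sketch for one PK parameter. The listed
# log(test) - log(reference) within-subject differences are
# hypothetical, chosen only to illustrate the arithmetic.

from math import exp, sqrt

d = [0.05, -0.02, 0.03, 0.00, 0.04, -0.01]
n = len(d)
mean = sum(d) / n
sd = sqrt(sum((v - mean) ** 2 for v in d) / (n - 1))
se = sd / sqrt(n)
t95 = 2.015                      # one-sided 95% t quantile, df = n - 1 = 5

gmr = exp(mean)                              # geometric mean ratio
lo, hi = exp(mean - t95 * se), exp(mean + t95 * se)
bioequivalent = 0.80 <= lo and hi <= 1.25    # standard acceptance range
```

A real analysis would derive the standard error from the crossover ANOVA rather than from raw paired differences, but the log-transform, t-based interval, and 80-125% acceptance test are the same.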
SEACAS Theory Manuals: Part III. Finite Element Analysis in Nonlinear Solid Mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laursen, T.A.; Attaway, S.W.; Zadoks, R.I.
1999-03-01
This report outlines the application of finite element methodology to large deformation solid mechanics problems, detailing also some of the key technological issues that effective finite element formulations must address. The presentation is organized into three major portions: first, a discussion of finite element discretization from the global point of view, emphasizing the relationship between a virtual work principle and the associated fully discrete system; second, a discussion of finite element technology, emphasizing the important theoretical and practical features associated with an individual finite element; and third, a detailed description of specific elements that enjoy widespread use, providing some examples of the theoretical ideas already described. Descriptions of problem formulation in nonlinear solid mechanics, nonlinear continuum mechanics, and constitutive modeling are given in three companion reports.
Weak Galerkin method for the Biot’s consolidation model
Hu, Xiaozhe; Mu, Lin; Ye, Xiu
2017-08-23
In this study, we develop a weak Galerkin (WG) finite element method for the Biot’s consolidation model in the classical displacement–pressure two-field formulation. Weak Galerkin linear finite elements are used for both displacement and pressure approximations in spatial discretizations. A backward Euler scheme is used for temporal discretization in order to obtain an implicit fully discretized scheme. We study the well-posedness of the linear system at each time step and also derive the overall optimal-order convergence of the WG formulation. Such a WG scheme is designed on general shape-regular polytopal meshes and provides stable and oscillation-free approximation for the pressure without special treatment. Lastly, numerical experiments are presented to demonstrate the efficiency and accuracy of the proposed weak Galerkin finite element method.
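The backward Euler step used for the temporal discretization is, in its simplest scalar form, the following implicit update (a generic sketch, unrelated to the WG spatial discretization):

```python
# Backward (implicit) Euler for the scalar test problem u' = -k*u:
# the implicit update u_{n+1} = u_n + dt * (-k * u_{n+1}) solves to
# u_{n+1} = u_n / (1 + k*dt), which is unconditionally stable.

def backward_euler(u0, k, dt, steps):
    u = u0
    for _ in range(steps):
        u = u / (1.0 + k * dt)
    return u

# Decay with k = 2 from u(0) = 1 to t = 1; the exact value is e^(-2).
u = backward_euler(1.0, 2.0, 0.001, 1000)
```

In the WG setting the scalar division becomes a linear solve at each time step, which is why the paper studies the well-posedness of that per-step system.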
Validation of the Fully-Coupled Air-Sea-Wave COAMPS System
NASA Astrophysics Data System (ADS)
Smith, T.; Campbell, T. J.; Chen, S.; Gabersek, S.; Tsu, J.; Allard, R. A.
2017-12-01
A fully-coupled, air-sea-wave numerical model, COAMPS®, has been developed by the Naval Research Laboratory to further enhance understanding of oceanic, atmospheric, and wave interactions. The fully-coupled air-sea-wave system consists of an atmospheric component with full physics parameterizations, an ocean model, NCOM (Navy Coastal Ocean Model), and two wave components, SWAN (Simulating Waves Nearshore) and WaveWatch III. Air-sea interactions between the atmosphere and ocean components are accomplished through bulk flux formulations of wind stress and sensible and latent heat fluxes. Wave interactions with the ocean include the Stokes' drift, surface radiation stresses, and enhancement of the bottom drag coefficient in shallow water due to the wave orbital velocities at the bottom. In addition, NCOM surface currents are provided to SWAN and WaveWatch III to simulate wave-current interaction. The fully-coupled COAMPS system was executed for several regions at both regional and coastal scales for the entire year of 2015, including the U.S. East Coast, Western Pacific, and Hawaii. Validation of COAMPS® includes observational data comparisons and evaluating operational performance on the High Performance Computing (HPC) system for each of these regions.
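The bulk flux formulations mentioned for the air-sea exchange have the familiar textbook form; a sketch with illustrative constant exchange coefficients (the coupled system uses more elaborate, stability-dependent parameterizations):

```python
# Textbook bulk formulas for air-sea exchange (illustrative constant
# coefficients; NOT the COAMPS parameterization):
#   wind stress     tau = rho_air * Cd * U^2          [N m^-2]
#   sensible heat   H   = rho_air * cp * Ch * U * dT  [W m^-2]

def bulk_fluxes(U, t_sea, t_air, rho=1.2, cp=1004.0,
                cd=1.2e-3, ch=1.0e-3):
    tau = rho * cd * U * U                    # momentum flux
    H = rho * cp * ch * U * (t_sea - t_air)   # sensible heat flux
    return tau, H

# 10 m/s wind over water 2 K warmer than the air.
tau, H = bulk_fluxes(U=10.0, t_sea=28.0, t_air=26.0)
```

In the coupled system these fluxes are the quantities handed between the atmospheric and ocean components each coupling step, with the exchange coefficients themselves depending on stability and sea state.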
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs of: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights are provided. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.
Origin and Evolutionary Alteration of the Mitochondrial Import System in Eukaryotic Lineages
Fukasawa, Yoshinori; Oda, Toshiyuki; Tomii, Kentaro
2017-01-01
Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages. Various evolutionary hypotheses have been formulated to explain this diversity. To resolve any contradiction, estimating the primitive state and clarifying changes from that state are necessary. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases’ evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes including convergent and migrational gains of a presequence receptor in TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex. PMID:28369657
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem in which the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
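The min-max structure, outer minimization over the design and inner maximization over candidate measures, can be miniaturized once the measure set is discretized (toy scenario values and measure set, not the paper's PDE-constrained problem):

```python
# Discretized distributionally robust toy: pick a design c that
# minimizes the worst-case expected loss over a small candidate set
# of probability vectors. All numbers are illustrative.

scenarios = [0.0, 1.0, 2.0]           # uncertain input values
measures = [                          # candidate probability vectors
    [0.6, 0.3, 0.1],
    [0.1, 0.3, 0.6],
    [1.0 / 3, 1.0 / 3, 1.0 / 3],
]

def worst_case_cost(c):
    # Inner maximization: worst expected squared loss over measures.
    return max(sum(p_i * (c - s) ** 2 for p_i, s in zip(p, scenarios))
               for p in measures)

# Outer minimization by grid search over the design variable.
grid = [i / 1000.0 for i in range(2001)]
c_star = min(grid, key=worst_case_cost)
```

For these numbers the robust design sits where the two extreme measures tie (c = 1), illustrating how the worst-case measure, not any single nominal one, drives the design.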
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elbert, Stephen T.; Kalsi, Karanjit; Vlachopoulou, Maria
Financial Transmission Rights (FTRs) help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, a novel non-linear dynamical system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on large-scale systems using data from the Western Electricity Coordinating Council (WECC). The NDS is demonstrated to outperform the widely used CPLEX algorithms while exhibiting superior scalability. Furthermore, the NDS based solver can be easily parallelized, which results in significant computational improvement.
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems. PMID:25143977
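For context, the kind of greedy baseline such metaheuristics are benchmarked against can be sketched in a few lines. This is a minimal earliest-finish-time list scheduler, not the CRO algorithm itself; the task graph and heterogeneous execution costs are made up:

```python
# Minimal earliest-finish-time list scheduler for a DAG on two
# heterogeneous nodes. deps maps each task to its predecessors;
# cost[t][n] is the (made-up) execution time of task t on node n.
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {"a": [2, 3], "b": [3, 2], "c": [4, 4], "d": [2, 5]}

def schedule(deps, cost, n_nodes=2):
    order, seen = [], set()
    def visit(t):                      # depth-first topological order
        if t in seen:
            return
        seen.add(t)
        for p in deps[t]:
            visit(p)
        order.append(t)
    for t in deps:
        visit(t)
    node_free = [0.0] * n_nodes        # time each node becomes idle
    finish, placement = {}, {}
    for t in order:
        ready = max((finish[p] for p in deps[t]), default=0.0)
        best = min(range(n_nodes),     # node giving earliest finish time
                   key=lambda n: max(ready, node_free[n]) + cost[t][n])
        start = max(ready, node_free[best])
        finish[t] = start + cost[t][best]
        node_free[best] = finish[t]
        placement[t] = best
    return finish, placement

finish, placement = schedule(deps, cost)
makespan = max(finish.values())
```

Metaheuristics like TMSCRO search over many such task-to-node mappings and orderings instead of committing to one greedy choice per task.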
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments, and the derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms have to operate as components of satellite formations and/or systems while at the same time retaining the capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on the Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.
Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng
2016-01-01
With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity-constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in the joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation with a capacity-constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs), each equipped with a single antenna, serve multiple single-antenna users via a multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that our method can improve the system capacity significantly under the backhaul resource constraint compared with the blind alternatives. PMID:27077865
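The "perfect mapping" step of an alternating scheme like the one described can be solved exactly as an assignment problem. Below is a minimal sketch using SciPy's Hungarian-algorithm solver; the rate matrix is a made-up example, where `rates[u][s]` is the achievable rate if user pair `u` is mapped to subcarrier `s`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative rates: rows = user pairs, columns = subcarriers.
rates = np.array([[3.0, 1.0, 2.0],
                  [1.0, 4.0, 1.5],
                  [2.0, 2.0, 5.0]])
# One-to-one mapping maximizing the total rate (Hungarian algorithm).
rows, cols = linear_sum_assignment(rates, maximize=True)
total_rate = rates[rows, cols].sum()
```

In an alternating strategy, this exact assignment step would be interleaved with updates of the continuous variables (e.g., backhaul resource shares) until the objective stops improving.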
Trajectory optimization for the National aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
While continuing the application of the inverse dynamics approach in obtaining optimal numerical solutions, the research during the past six months has focused on the formulation and derivation of closed-form solutions for constrained hypersonic flight trajectories. Since the first year of research found that a dominant portion of the optimal ascent trajectory of the aerospace plane is constrained by dynamic pressure and heating constraints, the application of the analytical solutions significantly enhances the efficiency of trajectory optimization, provides better insight into the trajectory, and conceivably has great potential for guidance of the vehicle. Work from this period has been reported in four technical papers. Two of the papers were presented at the AIAA Guidance, Navigation, and Control Conference (Hilton Head, SC, August 1992) and the Fourth International Aerospace Planes Conference (Orlando, FL, December 1992). The other two papers have been accepted for publication by the Journal of Guidance, Control, and Dynamics, and will appear in 1993. This report briefly summarizes the work done in the past six months and the work currently underway.
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or when data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm for counting the resolutions of a constraint tree that is polynomial in the number of sampled individuals, assuming the number of constraints is fixed. We generalise this algorithm to counting the resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic-time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
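The "easy" contemporaneous case mentioned above has a closed form worth making concrete: going backwards in time, the k extant lineages can coalesce in C(k, 2) ways, so the number of fully ranked binary trees on n contemporaneously sampled tips is the product of these binomials. This is the standard count, not the serially sampled or constrained counts the paper derives:

```python
from math import comb

def ranked_tree_count(n):
    """Number of fully ranked binary trees on n contemporaneous tips:
    R(n) = prod_{k=2}^{n} C(k, 2)."""
    count = 1
    for k in range(2, n + 1):
        count *= comb(k, 2)
    return count
```

For example, three tips admit 3 ranked trees and four tips admit 18; the product grows super-exponentially, which is why efficient counting matters for specifying tree priors.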
Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.
2011-01-01
Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
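A single graded gate of the kind a constrained-fuzzy-logic model composes can be sketched as a Hill-type transfer function, normalized so that full input activity maps to full output activity, with fuzzy AND/OR taken as min/max. The parameters `k` and `n` are illustrative assumptions, not values from the paper:

```python
# Hill-type transfer function mapping upstream activity in [0, 1] to
# downstream activity in [0, 1]; normalized so hill(1) == 1.
def hill(x, k=0.5, n=3):
    return x ** n * (k ** n + 1.0) / (k ** n + x ** n)

# Common fuzzy-logic combinations of several upstream activities.
def gate_and(*acts):
    return min(acts)

def gate_or(*acts):
    return max(acts)
```

Unlike a Boolean gate, `hill` lets a trained model represent partial protein activation, which is the limitation of the Boolean formalism the abstract sets out to overcome.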
Low Mach number fluctuating hydrodynamics for electrolytes
Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; ...
2016-11-18
Here, we formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are also interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids (A. Donev, et al., Physics of Fluids, 27, 3, 2015), we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. Furthermore, we demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second-order in the deterministic setting, and for length scales much greater than the Debye length gives results consistent with an electroneutral/ambipolar approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.
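The Debye length that sets the mesoscale in this abstract is easy to estimate from standard constants; the sketch below uses an illustrative 0.1 M NaCl concentration:

```python
import math

# Order-of-magnitude Debye length for saltwater, the scale below which
# the formulation resolves charge dynamics. Constants are standard;
# the concentration is an illustrative choice.
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 78.5          # relative permittivity of water near 298 K
kB = 1.381e-23        # Boltzmann constant, J/K
e = 1.602e-19         # elementary charge, C
NA = 6.022e23         # Avogadro's number, 1/mol
T = 298.0             # temperature, K
c = 100.0             # mol/m^3 (= 0.1 M NaCl)
ion_strength = sum(z ** 2 * c * NA for z in (+1, -1))   # sum z_i^2 n_i
debye = math.sqrt(eps0 * eps_r * kB * T / (e ** 2 * ion_strength))
# For 0.1 M NaCl this comes out just under one nanometer.
```

Resolving dynamics below this sub-nanometer-to-nanometer scale is what distinguishes the fluctuating approach from electroneutral/ambipolar approximations, which the paper recovers at much larger length scales.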
Kesisoglou, Filippos; Rossenu, Stefaan; Farrell, Colm; Van Den Heuvel, Michiel; Prohn, Marita; Fitzpatrick, Shaun; De Kam, Pieter-Jan; Vargo, Ryan
2014-11-01
Development of in vitro-in vivo correlations (IVIVCs) for extended-release (ER) products is commonly pursued during pharmaceutical development to increase product understanding, set release specifications, and support biowaivers. This manuscript details the development of Level C and Level A IVIVCs for ER formulations of niacin, a highly variable and extensively metabolized compound. Three ER formulations were screened in a cross-over study against immediate-release niacin. A Multiple Level C IVIVC was established for both niacin and its primary metabolite nicotinuric acid (NUA) as well as total niacin metabolites urinary excretion. For NUA, but not for niacin, Level A IVIVC models with acceptable prediction errors were achievable via a modified IVIVC rather than a traditional deconvolution/convolution approach. Hence, this is in contradiction with current regulatory guidelines that suggest that when a Multiple Level C IVIVC is established, Level A models should also be readily achievable. We demonstrate that for a highly variable, highly metabolized compound such as niacin, development of a Level A IVIVC model fully validated according to agency guidelines may be challenging. However, Multiple Level C models are achievable and could be used to guide release specifications and formulation/manufacturing changes. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maharrey, Sean P.; Wiese-Smith, Deneille; Highley, Aaron M.
2014-03-01
Simultaneous Thermogravimetric Modulated Beam Mass Spectrometry (STMBMS) measurements have been conducted on a new Insensitive Munitions (IM) formulation. IMX-101 is the first explosive to be fully IM qualified under new NATO STANAG guidelines for fielded munitions. The formulation uses dinitroanisole (DNAN) as a new melt cast material to replace TNT, and shows excellent IM performance when formulated with other energetic ingredients. The scope of this work is to explain this superior IM performance by investigating the reactive processes occurring in the material when subjected to a well-controlled thermal environment. The dominant reactive processes observed were a series of complex chemical interactions between the three main ingredients (DNAN, NQ, and NTO) that occur well below the onset of the normal decomposition process of any of the individual ingredients. This process shifts the thermal response of the formulations to a much lower temperature, where the kinetically controlled reaction processes are much slower. This low temperature shift has the effect of allowing the reactions to consume the reactive solids (NQ, NTO) well before the reaction rates increase and reach thermal runaway, resulting in a relatively benign response to the external stimuli. The main findings on the interaction processes are presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
...EPA is granting direct final approval of a revision to the Texas State Implementation Plan (SIP) concerning the Texas Low Emission Diesel fuel rules. The revisions clarify existing definitions and provisions, revise the approval procedures for alternative diesel fuel formulations, add new registration requirements, and update the rule to reflect the current program status because the rule is now fully implemented. This SIP revision meets statutory requirements.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
...EPA is proposing approval of a revision to the Texas State Implementation Plan (SIP) concerning the Texas Low Emission Diesel (TxLED) Fuel rules. The revisions clarify existing definitions and provisions, revise the approval procedures for alternative diesel fuel formulations, add new registration requirements, and update the rule to reflect the current program status because the rule is now fully implemented. This SIP revision meets statutory requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Jinn-Ouk; Hwang, Jai-chan; Noh, Hyerim
We present a complete set of exact and fully non-linear equations describing all three types of cosmological perturbations—scalar, vector and tensor perturbations. We derive the equations in a thoroughly gauge-ready manner, so that any spatial and temporal gauge conditions can be employed. The equations are completely general without any physical restriction except that we assume a flat homogeneous and isotropic universe as a background. We also comment briefly on the application of our formulation to the non-expanding Minkowski background.
Family System of Advanced Charring Ablators for Planetary Exploration Missions
NASA Technical Reports Server (NTRS)
Congdon, William M.; Curry, Donald M.
2005-01-01
Advanced Ablators Program Objectives: 1) Flight-ready (TRL-6) ablative heat shields for deep-space missions; 2) Diversity of selection from family-system approach; 3) Minimum weight systems with high reliability; 4) Optimized formulations and processing; 5) Fully characterized properties; and 6) Low-cost manufacturing. Definition and integration of candidate lightweight structures. Test and analysis database to support flight-vehicle engineering. Results from production scale-up studies and production-cost analyses.
An Unconditionally Stable Fully Conservative Semi-Lagrangian Method (PREPRINT)
2010-08-07
…Alessandrini. An Hamiltonian interface SPH formulation for multi-fluid and free surface flows. J. Comput. Phys., 228(22):8380–8393, 2009. [11] J.T… and J. Welch. Numerical Calculation of Time-Dependent Viscous Incompressible Flow of Fluid with Free Surface. Phys. Fluids, 8:2182–2189, 1965. [14…] …flow is divergence free, one would generally expect these lines to be commensurate; however, due to numerical errors in interpolation there is some…
Guzman, David Sanchez-Migallon; Drazenovich, Tracy L.; Olsen, Glenn H.; Willits, Neil H.; Paul-Murphy, Joanne R.
2013-01-01
Conclusions and Clinical Relevance—Hydromorphone at the doses evaluated significantly increased the thermal nociception threshold for American kestrels for 3 to 6 hours. Additional studies with other types of stimulation, formulations, dosages, routes of administration, and testing times are needed to fully evaluate the analgesic and adverse effects of hydromorphone in kestrels and other avian species and the use of hydromorphone in clinical settings.
PSQP: Puzzle Solving by Quadratic Programming.
Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome
2017-02-01
In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangular pieces: Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness compared with state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability.
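The "maximize a constrained quadratic by gradient ascent" idea can be illustrated on a toy instance: ascend x^T A x over the probability simplex, projecting back after each step. PSQP's actual constraint set (assignment variables between piece pairings) is richer; this sketch only shows the projected-ascent mechanics:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / (np.arange(v.size) + 1.0))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

# Maximize x^T A x over the simplex by projected gradient ascent.
A = np.diag([2.0, 1.0])
x = np.full(2, 0.5)
for _ in range(200):
    x = project_simplex(x + 0.1 * (A + A.T) @ x)
# The maximum of 2*x1^2 + x2^2 on the simplex sits at the vertex (1, 0).
```

As in PSQP, the constrained maximizer of a convex quadratic lands on the boundary of the feasible polytope, which is what drives the iterates toward a discrete (here, vertex) solution.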
Quantization of a U(1) gauged chiral boson in the Batalin-Fradkin-Vilkovisky scheme
NASA Astrophysics Data System (ADS)
Ghosh, Subir
1994-03-01
The scheme developed by Batalin, Fradkin, and Vilkovisky (BFV) to convert a second-class constrained system to a first-class one (having gauge invariance) is used in the Floreanini-Jackiw formulation of the chiral boson interacting with a U(1) gauge field. Explicit expressions of the BRST charge, the unitarizing Hamiltonian, and the BRST invariant effective action are provided and the full quantization is carried through. The spectra in both cases have been analyzed to show the presence of the proper chiral components explicitly. In the gauged model, Wess-Zumino terms in terms of the Batalin-Fradkin fields are identified.
Canonical quantization of general relativity in discrete space-times.
Gambini, Rodolfo; Pullin, Jorge
2003-01-17
It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang.
Joint fMRI analysis and subject clustering using sparse dictionary learning
NASA Astrophysics Data System (ADS)
Kim, Seung-Jun; Dontaraju, Krishna K.
2017-08-01
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting the sparsity of the maps, clusters of the subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, in order to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests performed on synthetic fMRI data show promising results and provide insight into the proposed technique.
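The sparse dictionary learning primitive underlying such methods can be sketched as an alternation between ISTA sparse coding and a least-squares dictionary update (the subspace-clustering and cross-validation layers of the paper are omitted). Matrix sizes, the sparsity weight, and iteration counts are illustrative:

```python
import numpy as np

# Alternating sparse dictionary learning on synthetic data:
#   fix D, update sparse codes S by ISTA (gradient + soft-threshold);
#   fix S, update D by least squares, then renormalize the atoms.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))        # data: 20 dims, 50 samples
D = rng.standard_normal((20, 8))         # dictionary: 8 atoms
D /= np.linalg.norm(D, axis=0)
S = np.zeros((8, 50))                    # sparse codes
lam = 0.1                                # l1 sparsity weight
for _ in range(30):
    step = 1.0 / np.linalg.norm(D.T @ D, 2)
    for _ in range(20):                  # ISTA sparse coding
        Z = S - step * (D.T @ (D @ S - X))
        S = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)
    D = X @ np.linalg.pinv(S)            # least-squares dictionary update
    norms = np.linalg.norm(D, axis=0)
    norms = np.where(norms > 1e-12, norms, 1.0)
    D, S = D / norms, S * norms[:, None] # renormalize atoms, keep D @ S
err = np.linalg.norm(X - D @ S) / np.linalg.norm(X)
```

Each half-step decreases the penalized reconstruction objective, so the relative error `err` ends below 1; real pipelines add the constraints (e.g., subspace structure on the codes) that give the paper's formulation its name.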
On the importance of collective excitations for thermal transport in graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gill-Comeau, Maxime; Lewis, Laurent J., E-mail: Laurent.Lewis@UMontreal.CA
2015-05-11
We use equilibrium molecular dynamics (MD) simulations to study heat transport in bulk single-layer graphene. Through a modal analysis of the MD trajectories employing a time-domain formulation, we find that collective excitations involving flexural acoustic (ZA) phonons, which have been neglected in the previous MD studies, actually dominate the heat flow, generating as much as 78% of the flux. These collective excitations are, however, much less significant if the atomic displacements are constrained in the lattice plane. Although relaxation is slow, we find graphene to be a regular (non-anomalous) heat conductor for sample sizes of order 40 μm and more.
Behavioral genetics and criminal responsibility at the courtroom.
Tatarelli, Roberto; Del Casale, Antonio; Tatarelli, Caterina; Serata, Daniele; Rapinesi, Chiara; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo
2014-04-01
Several questions arise from the recent use of behavioral genetic research data in the courtroom. Ethical issues concerning the influence of biological factors on human free will must be considered when specific gene patterns are advocated to constrain a court's judgment, especially regarding violent crimes. Aggression genetics studies are both difficult to interpret and inconsistent; hence, in the absence of a psychiatric diagnosis, genetic data are currently difficult to prioritize in the courtroom. The judge's probabilistic considerations in formulating a sentence must take causality into account, and causality cannot currently be ensured by genetic data. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Model-Mapped RPA for Determining the Effective Coulomb Interaction
NASA Astrophysics Data System (ADS)
Sakakibara, Hirofumi; Jang, Seung Woo; Kino, Hiori; Han, Myung Joon; Kuroki, Kazuhiko; Kotani, Takao
2017-04-01
We present a new method to obtain a model Hamiltonian from first-principles calculations. The effective interaction contained in the model is determined on the basis of random phase approximation (RPA). In contrast to previous methods such as projected RPA and constrained RPA (cRPA), the new method named "model-mapped RPA" takes into account the long-range part of the polarization effect to determine the effective interaction in the model. After discussing the problems of cRPA, we present the formulation of the model-mapped RPA, together with a numerical test for the single-band Hubbard model of HgBa2CuO4.
Topical use of dexpanthenol: a 70th anniversary article.
Proksch, Ehrhardt; de Bony, Raymond; Trapp, Sonja; Boudon, Stéphanie
2017-12-01
Approximately 70 years ago, the first topical dexpanthenol-containing formulation (Bepanthen™ Ointment) was developed. Nowadays, various topical dexpanthenol preparations exist, tailored according to individual requirements. Topical dexpanthenol has emerged as a frequently used formulation in the field of dermatology and skin care. Various studies confirmed dexpanthenol's moisturizing and skin barrier enhancing potential. It prevents skin irritation, stimulates skin regeneration and promotes wound healing. Two main directions in the use of topical dexpanthenol-containing formulations have therefore been pursued: as a skin moisturizer/skin barrier restorer and as a facilitator of wound healing. This 70th anniversary paper reviews studies with topical dexpanthenol in the skin conditions where it is most frequently used. Although dexpanthenol was discovered decades ago, its exact mechanisms of action have not yet been fully elucidated. With the adoption of new technologies, new light has been shed on dexpanthenol's mode of action at the molecular level. It appears that dexpanthenol increases the mobility of stratum corneum molecular components which are important for barrier function, and modulates the expression of genes important for wound healing. This review will update readers on recent advances in this field.
Bachar, Michal; Mandelbaum, Amitai; Portnaya, Irina; Perlstein, Hadas; Even-Chen, Simcha; Barenholz, Yechezkel; Danino, Dganit
2012-06-10
β-casein is an amphiphilic protein that self-organizes into well-defined core-shell micelles. We developed these micelles as efficient nanocarriers for oral drug delivery. Our model drug is celecoxib, an anti-inflammatory hydrophobic drug utilized for treatment of rheumatoid arthritis and osteoarthritis, now also evaluated as a potent anticancer drug. This system is unique as it enables encapsulation loads >100-fold higher than other β-casein/drug formulations, and does not require additives as do other formulations that have high loadings. This is combined with the ability to lyophilize the formulation without a cryoprotectant, long-term physical and chemical stability of the resulting powder, and fully reversible reconstitution of the structures by rehydration. The dry dosage form, in which >95% of the drug is encapsulated, meets the daily dose. Cryo-TEM and DLS prove that drug encapsulation results in micelle swelling, and X-ray diffraction shows that the encapsulated drug is amorphous. Altogether, our novel dosage form is highly advantageous for oral administration. Copyright © 2012 Elsevier B.V. All rights reserved.
Yassin, Samy; Goodwin, Daniel J; Anderson, Andrew; Sibik, Juraj; Wilson, D Ian; Gladden, Lynn F; Zeitler, J Axel
2015-01-01
Disintegration performance was measured by analysing both water ingress and tablet swelling of pure microcrystalline cellulose (MCC) and in mixture with croscarmellose sodium using terahertz pulsed imaging (TPI). Tablets made from pure MCC with porosities of 10% and 15% showed similar swelling and transport kinetics: within the first 15 s, tablets had swollen by up to 33% of their original thickness and water had fully penetrated the tablet following Darcy flow kinetics. In contrast, MCC tablets with a porosity of 5% exhibited much slower transport kinetics, with swelling to only 17% of their original thickness and full water penetration reached after 100 s, dominated by case II transport kinetics. The effect of adding superdisintegrant to the formulation and varying the temperature of the dissolution medium between 20°C and 37°C on the swelling and transport process was quantified. We have demonstrated that TPI can be used to non-invasively analyse the complex disintegration kinetics of formulations that take place on timescales of seconds and is a promising tool to better understand the effect of dosage form microstructure on its performance. By relating immediate-release formulations to mathematical models used to describe controlled release formulations, it becomes possible to use this data for formulation design. © 2015 The Authors. Journal of Pharmaceutical Sciences published by Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci 104:3440–3450, 2015 PMID:26073446
pH-independent immediate release polymethacrylate formulations--an observational study.
Claeys, Bart; Vandeputte, Reinout; De Geest, Bruno G; Remon, Jean Paul; Vervaet, Chris
2016-01-01
Using Eudragit® E PO (EudrE) as a polymethacrylate carrier, the aim of the study was to develop a pH-independent dosage form containing ibuprofen (IBP) as an active compound via chemical modification of the polymer (i.e. quaternization of amine function) or via the addition of dicarboxylic acids (succinic, glutaric and adipic acid) to create a pH micro-environment during dissolution. Biconvex tablets (diameter: 10 mm; height: 5 mm) were produced via hot melt extrusion and injection molding. In vitro dissolution experiments revealed that a minimum of 25% of quaternization was sufficient to partially (up to pH 5) eliminate the pH-dependent effect of the EudrE/IBP formulation. The addition of dicarboxylic acids did not alter IBP release in a pH 1 and 3 medium as the dimethyl amino groups of EudrE are already fully protonated, while in a pH 5 solvent IBP release was significantly improved (cf. from 0% to 92% release after 1 h dissolution experiments upon the addition of 20 wt.% succinic acid). Hence, both approaches resulted in a pH-independent (up to pH 5) immediate release formulation. However, the presence of a positively charged polymer induced stability issues (recrystallization of API) and the formulations containing dicarboxylic acids were classified as mechanically unstable. Hence, further research is needed to obtain a pH-independent immediate release formulation while using EudrE as a polymethacrylate carrier.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Statistical optimisation techniques in fatigue signal editing problem
NASA Astrophysics Data System (ADS)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-01
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting, which has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
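A minimal sketch of the constrained segment selection in Python; all numbers, the single damage constraint, and the penalty scheme are illustrative assumptions, not the paper's implementation (which also constrains root mean square and kurtosis):

```python
import random

def ga_edit_signal(lengths, damages, keep_frac=0.95,
                   pop_size=40, n_gen=100, p_mut=0.05, seed=0):
    """Toy constrained GA for segment-based fatigue signal editing:
    choose a subset of labelled segments minimizing retained length
    while keeping at least keep_frac of the total fatigue damage.
    Constraint violation is handled with a large penalty."""
    rng = random.Random(seed)
    n = len(lengths)
    target = keep_frac * sum(damages)

    def fitness(bits):                     # lower is better
        length = sum(l for l, b in zip(lengths, bits) if b)
        damage = sum(d for d, b in zip(damages, bits) if b)
        return length + 1e6 * max(0.0, target - damage)

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        children = pop[:2]                 # elitism: keep two best
        while len(children) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = min(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]    # one-point crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)
```

The chromosome is a binary mask over the labelled segments; the GA trades retained length against the damage-retention constraint exactly as described above, only with one constraint instead of three.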
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures, such as computational fluid dynamics (CFD) codes, has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). To account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
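The K-S transformation of several objectives into one smooth unconstrained function can be sketched as follows. This is the textbook form of the Kreisselmeier-Steinhauser envelope, written in its numerically stable max-shifted form; the weight factors mentioned above are omitted for brevity:

```python
import math

def ks_function(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, differentiable
    upper bound on max(values). As rho grows the envelope tightens:
    max(v) <= KS(v) <= max(v) + ln(n)/rho."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho
```

Replacing max(f_1, ..., f_n) by this smooth surrogate is what allows a gradient-based method such as BFGS to be applied to the aggregated problem.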
Design Tools for Cost-Effective Implementation of Planetary Protection Requirements
NASA Technical Reports Server (NTRS)
Hamlin, Louise; Belz, Andrea; Evans, Michael; Kastner, Jason; Satter, Celeste; Spry, Andy
2006-01-01
Since the Viking missions to Mars in the 1970s, accounting for the costs associated with planetary protection implementation has not been done systematically during early project formulation phases, leading to unanticipated costs during subsequent implementation phases of flight projects. The simultaneous development of more stringent planetary protection requirements, resulting from new knowledge about the limits of life on Earth, together with current plans to conduct life-detection experiments on a number of different solar system target bodies motivates a systematic approach to integrating planetary protection requirements and mission design. A current development effort at NASA's Jet Propulsion Laboratory is aimed at integrating planetary protection requirements more fully into the early phases of mission architecture formulation and at developing tools to more rigorously predict associated cost and schedule impacts of architecture options chosen to meet planetary protection requirements.
A semi-implicit finite difference model for three-dimensional tidal circulation
Casulli, V.; Cheng, R.T.
1992-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is presented. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that, in the absence of horizontal viscosity, the resulting algorithm is unconditionally stable at a minimal computational cost. When only one vertical layer is specified, this method reduces, as a particular case, to a semi-implicit scheme for the solution of the corresponding two-dimensional shallow water equations. The resulting two- and three-dimensional algorithm is fast, accurate and mass conservative. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers.
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.
Immunological Evaluation of Recent MUC1 Glycopeptide Cancer Vaccines
Hossain, Md Kamal; Wall, Katherine A.
2016-01-01
Aberrantly glycosylated mucin 1 (MUC1) is a recognized tumor-specific antigen on epithelial cell tumors. A wide variety of MUC1 glycopeptide anti-cancer vaccines have been formulated by many research groups. Some researchers have used MUC1 alone as an immunogen whereas other groups used different antigenic carrier proteins such as bovine serum albumin or keyhole limpet hemocyanin for conjugation with MUC1 glycopeptide. A variety of adjuvants have been used with MUC1 glycopeptides to improve their immunogenicity. Fully synthetic multicomponent vaccines have been synthesized by incorporating different T helper cell epitopes and Toll-like receptor agonists. Some vaccine formulations utilized liposomes or nanoparticles as vaccine delivery systems. In this review, we discuss the immunological evaluation of different conjugate or synthetic MUC1 glycopeptide vaccines in different tumor or mouse models that have been published since 2012. PMID:27472370
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, Gary F.; Banerjee, Prasanta K.; Dunn, Michael G.
1988-01-01
Significant progress was made toward the goal of developing a general-purpose boundary element method for hot fluid-structure interaction. For the solid phase, a boundary-only formulation was developed and implemented for uncoupled transient thermoelasticity in two dimensions. The elimination of volume discretization not only drastically reduces the required modeling effort, but also permits unconstrained variation of the through-the-thickness temperature distribution. Meanwhile, for the fluids, fundamental solutions were derived for transient incompressible and compressible flow in the absence of the convective terms. Boundary element formulations were developed and described. For the incompressible case, the necessary kernel functions, under transient and steady-state conditions, were derived and fully implemented into a general-purpose, multi-region boundary element code. Several examples were examined to study the suitability and convergence characteristics of the various algorithms.
Open Quantum Walks and Dissipative Quantum Computing
NASA Astrophysics Data System (ADS)
Petruccione, Francesco
2012-02-01
Open Quantum Walks (OQWs) have been recently introduced as quantum Markov chains on graphs [S. Attal, F. Petruccione, C. Sabot, and I. Sinayskiy, E-print: http://hal.archives-ouvertes.fr/hal-00581553/fr/]. The formulation of OQWs is based exclusively on the non-unitary dynamics induced by the environment. It will be shown that OQWs are a very useful tool for the formulation of dissipative quantum computing and quantum state preparation. In particular, it will be shown how to implement single-qubit gates and the CNOT gate as OQWs on fully connected graphs. Also, OQWs make possible the dissipative quantum state preparation of arbitrary single-qubit states and of all two-qubit Bell states. Finally, it will be shown how to reformulate efficiently a discrete-time version of dissipative quantum computing in the language of OQWs.
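A single OQW step can be sketched directly from its defining map ρ_i' = Σ_j B_ij ρ_j B_ij†, which preserves the total trace whenever Σ_i B_ij† B_ij = I for every node j. The two-node operators in the usage example below are illustrative choices, not taken from the cited paper:

```python
import numpy as np

def oqw_step(rho, B):
    """One step of an open quantum walk. `rho` maps node -> density
    matrix (unnormalized; traces sum to 1 over all nodes), and B[i][j]
    is the jump operator from node j to node i. Trace preservation
    requires sum_i B[i][j]^dag B[i][j] = I for every j."""
    nodes = list(rho)
    new = {}
    for i in nodes:
        acc = np.zeros_like(rho[nodes[0]], dtype=complex)
        for j in nodes:
            acc += B[i][j] @ rho[j] @ B[i][j].conj().T
        new[i] = acc
    return new
```

A quick self-check is that the sum of the traces over all nodes stays equal to one under iteration, which is the quantum analogue of column-stochasticity for a classical Markov chain.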
Hadad, Ghada M; Abdel-Salam, Randa A; Emara, Samy
2011-12-01
Application of a sensitive and rapid flow injection analysis (FIA) method for the determination of topiramate, piracetam, and levetiracetam in pharmaceutical formulations has been investigated. The method is based on the reaction with ortho-phthalaldehyde and 2-mercaptoethanol in a basic buffer and measurement of absorbance at 295 nm under flow conditions. Variables affecting the determination, such as sample injection volume, pH, ionic strength, reagent concentrations, reagent flow rate and other FIA parameters, were optimized to produce the most sensitive and reproducible results using a quarter-fraction factorial design for five factors at two levels. The method has also been optimized and fully validated in terms of linearity and range, limits of detection and quantitation, precision, selectivity and accuracy. The method was successfully applied to the analysis of pharmaceutical preparations.
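A quarter-fraction design for five factors at two levels contains 2^(5-2) = 8 runs. The sketch below builds one such design; the generators D = A·B and E = A·C are a common textbook choice, not necessarily the ones used in the study:

```python
from itertools import product

def quarter_fraction_2_5():
    """2^(5-2) fractional factorial: 8 runs for five two-level factors
    coded -1/+1. Base factors A, B, C span a full 2^3 design; the
    generators D = A*B and E = A*C define the remaining two columns."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append({'A': a, 'B': b, 'C': c, 'D': a * b, 'E': a * c})
    return runs
```

Each factor column is balanced (four runs at each level), which is what allows main effects to be estimated from only 8 of the 32 full-factorial runs, at the cost of aliasing with interactions.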
Hong, Shiqi; Shen, Shoucang; Tan, David Cheng Thiam; Ng, Wai Kiong; Liu, Xueming; Chia, Leonard S O; Irwan, Anastasia W; Tan, Reginald; Nowak, Steven A; Marsh, Kennan; Gokhale, Rajeev
2016-01-01
Encapsulation of drugs in mesoporous silica using a co-spray drying process has recently been explored as a potential industrial method. However, the impact of spray drying on manufacturability, physicochemical stability and bioavailability relative to conventional drug-loading processes is yet to be fully investigated. Using a 2^3 factorial design, this study aims to investigate the effect of drug-loading process (co-spray drying and solvent impregnation), mesoporous silica pore size (SBA-15, 6.5 nm and MCM-41, 2.5 nm) and percentage drug load (30% w/w and 50% w/w) on material properties, crystallinity, physicochemical stability, release profiles and bioavailability of fenofibrate (FEN) loaded into mesoporous silica. Scanning electron microscopy (SEM) images, powder X-ray diffraction and differential scanning calorimetry measurements indicated that the co-spray drying process was able to load up to 50% (w/w) FEN in amorphous form onto the mesoporous silica, compared to 30% (w/w) for solvent impregnation. The in vitro dissolution rate of the co-spray dried formulations was also significantly (p = 0.044) better than that of solvent-impregnated formulations at the same drug loading. A six-month accelerated stability test at 40 °C/75% RH in an open dish indicated excellent physical and chemical stability of formulations prepared by both methods. The amorphous state of FEN and the enhanced dissolution profiles were well preserved, and very low levels of degradation were detected after storage. The dog data for the three selected co-spray-dried formulations revealed a multiple-fold increase in FEN bioavailability compared to the reference crystalline FEN. These results validate the viability of co-spray-dried mesoporous silica formulations with high amorphous drug load as potential drug delivery systems for poorly water-soluble drugs.
Cirri, Marzia; Roghi, Alessandra; Valleri, Maurizio; Mura, Paola
2016-07-01
The aim of this work was to develop effective fast-dissolving tablet formulations of glyburide, endowed with improved dissolution and technological properties, investigating the actual effectiveness of the Solid Self-MicroEmulsifying Drug Delivery System (S-SMEDDS) approach. An initial screening aimed at determining the solubility of the drug in different oils, surfactants and cosurfactants allowed the selection of the most suitable components for liquid SMEDDS, whose relative amounts were defined by the construction of pseudo-ternary phase diagrams. The selected liquid SMEDDS formulations (Capyol 90 as oil, Tween 20 as surfactant and Glycofurol or Transcutol as cosurfactant) were converted into solid SMEDDS by adsorbing them onto Neusilin (1:1 and 1:0.8 w/w S-SMEDDS:carrier), and fully characterized in terms of solid state (DSC and X-ray powder diffraction), morphological (ESEM) and dissolution properties, particle size and reconstitution ability. Finally, the 1:1 S-SMEDDS containing Glycofurol as cosurfactant, showing the best performance, was selected to prepare two final tablet formulations. The ratio test (t10 min ratio and DE60 ratio) and pair-wise procedures (difference (f1) and similarity (f2) factors) highlighted the similarity of the newly developed tablets and the marked difference between their drug dissolution profiles and those of formulations based on the micronized drug. The S-SMEDDS approach made it possible to develop fast-dissolving tablets of glyburide, endowed with good technological properties and able to achieve complete drug dissolution in a time ranging from 10 to 15 min, depending on the formulation composition. Copyright © 2016 Elsevier B.V. All rights reserved.
Coupled nonlinear aeroelasticity and flight dynamics of fully flexible aircraft
NASA Astrophysics Data System (ADS)
Su, Weihua
This dissertation introduces an approach to effectively model and analyze the coupled nonlinear aeroelasticity and flight dynamics of highly flexible aircraft. A reduced-order, nonlinear, strain-based finite element framework is used, which is capable of assessing the fundamental impact of structural nonlinear effects in preliminary vehicle design and control synthesis. The cross-sectional stiffness and inertia properties of the wings are calculated along the wing span, and then incorporated into the one-dimensional nonlinear beam formulation. Finite-state unsteady subsonic aerodynamics is used to compute airloads along lifting surfaces. Flight dynamic equations are then introduced to complete the aeroelastic/flight dynamic system equations of motion. Instead of merely considering the flexibility of the wings, the current work allows all members of the vehicle to be flexible. Due to their characteristics of being slender structures, the wings, tail, and fuselage of highly flexible aircraft can be modeled as beams undergoing three-dimensional displacements and rotations. New kinematic relationships are developed to handle the split beam systems, such that fully flexible vehicles can be effectively modeled within the existing framework. Different aircraft configurations are modeled and studied, including Single-Wing, Joined-Wing, Blended-Wing-Body, and Flying-Wing configurations. The Lagrange Multiplier Method is applied to model the nodal displacement constraints at the joint locations. Based on the proposed models, roll response and stability studies are conducted on fully flexible and rigidized models. The impacts of the flexibility of different vehicle members on flutter with rigid body motion constraints, flutter in free flight condition, and roll maneuver performance are presented. Also, the static stability of the compressive member of the Joined-Wing configuration is studied.
A spatially-distributed discrete gust model is incorporated into the time simulation of the framework. Gust responses of the Flying-Wing configuration subject to stall effects are investigated. A bilinear torsional stiffness model is introduced to study the skin wrinkling due to large bending curvature of the Flying-Wing. The numerical studies illustrate the improvements of the existing reduced-order formulation with new capabilities of both structural modeling and coupled aeroelastic and flight dynamic analysis of fully flexible aircraft.
Hua, Shanshan; Liang, Jie; Zeng, Guangming; Xu, Min; Zhang, Chang; Yuan, Yujie; Li, Xiaodong; Li, Ping; Liu, Jiayu; Huang, Lu
2015-11-15
Groundwater management in China has been facing challenges from both climate change and urbanization and is now considered a national priority. However, unprecedented uncertainty exists in future scenarios, making it difficult to formulate management planning paradigms. In this paper, we apply modern portfolio theory (MPT) to formulate an optimal staged investment plan for groundwater contamination remediation in China. This approach generates optimal weights of investment for each stage of the groundwater management and helps maximize expected return while minimizing overall risk in the future. We find that the efficient frontier of investment displays an upward-sloping shape in risk-return space. The expected value of the groundwater vulnerability index increases from 0.6118 to 0.6230 as the risk of uncertainty increases from 0.0118 to 0.0297. If management investment is constrained not to exceed a certain total cost up to the year 2050, the efficient frontier can help decision makers make the most appropriate choice in the trade-off between risk and return. Copyright © 2015 Elsevier Ltd. All rights reserved.
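The stage-weighting step of an MPT formulation reduces to classical mean-variance optimization. A minimal sketch follows (equality constraints only, so short positions are allowed; the two-stage inputs in the usage example are illustrative, not the paper's data):

```python
import numpy as np

def min_variance_weights(mu, cov, target_return):
    """Markowitz mean-variance weights: minimize w'Cw subject to
    mu'w = target_return and sum(w) = 1, solved via the KKT linear
    system (no inequality constraints in this sketch)."""
    n = len(mu)
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = 2 * cov           # stationarity: 2*C*w + l1*mu + l2*1 = 0
    A[:n, n] = mu
    A[:n, n + 1] = 1.0
    A[n, :n] = mu                 # return constraint
    A[n + 1, :n] = 1.0            # budget constraint
    rhs = np.zeros(n + 2)
    rhs[n] = target_return
    rhs[n + 1] = 1.0
    return np.linalg.solve(A, rhs)[:n]
```

Sweeping `target_return` over a range of values and plotting the resulting portfolio risk against return traces out exactly the upward-sloping efficient frontier described above.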
Jang, Hae-Won; Ih, Jeong-Guon
2013-03-01
The time domain boundary element method (TBEM) to calculate the exterior sound field using the Kirchhoff integral has difficulties in non-uniqueness and exponential divergence. In this work, a method to stabilize TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at the additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers the exponential divergence due to the remaining unstable high order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example for a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.
Li, Zhijun; Ge, Shuzhi Sam; Liu, Sibang
2014-08-01
This paper investigates the optimal distribution and control of feet forces for quadruped robots under external disturbance forces. First, we formulate the constrained dynamics of quadruped robots and derive a reduced-order dynamical model of motion/force. Considering an external wrench on the quadruped robot, the distribution of required forces and moments on the supporting legs is handled as a tip-point force distribution and used to equilibrate the external wrench. Then, a gradient neural network is adopted to minimize the quadratic objective function subject to linear equality and inequality constraints. For the obtained optimized tip-point forces and the motion of the legs, we propose hybrid motion/force control based on an adaptive neural network to compensate for perturbations in the environment and approximate the feedforward force and impedance of the leg joints. The proposed control can handle uncertainties including approximation error and external perturbation. The proposed control is verified in simulation.
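If only the equality (wrench equilibrium) part of the distribution problem is kept, minimizing the squared tip-point forces has a closed form via the pseudoinverse. This sketch omits the inequality constraints (e.g. friction cones) and the gradient neural network solver used in the paper:

```python
import numpy as np

def distribute_feet_forces(G, wrench):
    """Minimum-norm tip-point force distribution: minimize ||f||^2
    subject to the linear equilibrium G f = wrench, where the columns
    of G map each supporting-leg contact force into the body wrench.
    The Moore-Penrose pseudoinverse gives the closed-form optimum."""
    return np.linalg.pinv(G) @ wrench
```

In the degenerate one-dimensional case of two legs sharing a vertical load, the minimum-norm solution splits the load evenly, which matches the intuition behind the quadratic objective.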
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav
2015-11-01
Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, creating a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.
Explicitly covariant dispersion relations and self-induced transparency
NASA Astrophysics Data System (ADS)
Mahajan, S. M.; Asenjo, Felipe A.
2017-02-01
Explicitly covariant dispersion relations for a variety of plasma waves in unmagnetized and magnetized plasmas are derived in a systematic manner from a fully covariant plasma formulation. One needs to invoke relatively little known invariant combinations constructed from the ambient electromagnetic fields and the wave vector to accomplish the program. The implication of this work applied to the self-induced transparency effect is discussed. Some problems arising from the inconsistent use of relativity are pointed out.
A Conformal, Fully-Conservative Approach for Predicting Blast Effects on Ground Vehicles
2014-02-09
hydrocode. Again, a very detailed model of the pick-up truck was used. The results demonstrated that the soil type and moisture content affect both... dynamics code with the capability to model soil and blast using a multi-species formulation with advanced equations of state. The two-way coupling... of the blast, the effects of soil, which could have a high water content, must also be included. An attractive strategy, which is much less costly
Chen, Yongsheng; Persaud, Bhagwant
2014-09-01
Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, just as important, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF-related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components: fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining the homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors. Copyright © 2014 Elsevier Ltd. All rights reserved.
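The multiplicative structure of an SPF with a CM-Function component can be sketched as follows; every coefficient here is an illustrative placeholder, not a value fitted in the paper:

```python
import math

def predicted_crashes(aadt, lane_width,
                      b0=-5.0, b1=0.8, beta=-0.08, base_width=3.6):
    """Toy fully specified SPF with a CM-Function component:
        base SPF:      N = exp(b0) * AADT^b1
        CM-Function:   CMF(w) = exp(beta * (w - base_width))
    The CM-Function is continuous and multiplicative, so it equals 1
    at the base condition and scales the base prediction smoothly,
    avoiding categorical CMF lookups. All coefficients are invented."""
    base = math.exp(b0) * aadt ** b1
    cmf = math.exp(beta * (lane_width - base_width))
    return base * cmf
```

In a fully specified model all such terms are estimated jointly in one regression, which is how cross-factor correlations get absorbed instead of being ignored when independent CMFs are multiplied.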
Frictional strength of wet and dry montmorillonite
Morrow, Carolyn A.; Moore, Diane E.; Lockner, David A.
2017-01-01
Montmorillonite is a common mineral in fault zones, and its low strength relative to other common gouge minerals is important in many models of fault rheology. However, the coefficient of friction, μ, varies with degree of saturation and is not well constrained in the literature due to the difficulty of establishing fully drained or fully dried states in the laboratory. We measured μ of both saturated and oven-dried montmorillonite at normal stresses up to 700 MPa. Care was taken to shear saturated samples slowly enough to avoid pore fluid overpressure. For saturated samples, μ increased from 0.10 to 0.28 with applied effective normal stress, while for dry samples μ decreased from 0.78 to 0.45. The steady state rate dependence of friction, (a − b), was positive, promoting stable sliding. The wide disparity in reported frictional strengths can be attributed to experimental procedures that promote differing degrees of partial saturation or overpressured pore fluid conditions.
Ackermann, Uwe; Lewis, Jason S; Young, Kenneth; Morris, Michael J; Weickhardt, Andrew; Davis, Ian D; Scott, Andrew M
2016-08-01
Imaging of androgen receptor expression in prostate cancer using F-18 FDHT is becoming increasingly popular. With the radiolabelling precursor now commercially available, developing a fully automated synthesis of [18F]FDHT is important. We have fully automated the synthesis of [18F]FDHT on the iPhase FlexLab module using only commercially available components. Total synthesis time was 90 min, and radiochemical yields were 25-33% (n = 11). Radiochemical purity of the final formulation was > 99%, and specific activity was > 18.5 GBq/µmol for all batches. This method can be up-scaled as desired, thus making it possible to study multiple patients in a day. Furthermore, our procedure uses only 4 mg of precursor and is therefore cost-effective. The synthesis has now been validated at Austin Health and is currently used for [18F]FDHT studies in patients. We believe that this method can easily be adapted by other modules to further widen the availability of [18F]FDHT. Copyright © 2016 John Wiley & Sons, Ltd.
Towards spatially constrained gust models
NASA Astrophysics Data System (ADS)
Bos, René; Bierbooms, Wim; van Bussel, Gerard
2014-06-01
With the trend of moving towards 10-20 MW turbines, rotor diameters are growing beyond the size of the largest turbulent structures in the atmospheric boundary layer. As a consequence, the fully uniform transients that are commonly used to predict extreme gust loads are losing their connection to reality and may lead to gross overdimensioning. More suitable would be to represent gusts by advecting air parcels and posing certain physical constraints on their size and position. However, this would introduce several new degrees of freedom that significantly increase the computational burden of extreme load prediction. In an attempt to elaborate on the costs and benefits of such an approach, load calculations were done on the DTU 10 MW reference turbine, where a single uniform gust shape was given various spatial dimensions with the transverse wavelength ranging up to twice the rotor diameter (357 m). The resulting loads displayed a very high spread, but remained well under the level of a uniform gust. Moving towards spatially constrained gust models would therefore yield far less conservative, though more realistic, predictions at the cost of higher computation time.
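One way to give a uniform 1-cosine gust a transverse spatial dimension, in the spirit described above, is to taper it across the rotor with a cosine of the stated wavelength. This particular shape is an illustrative assumption, not the paper's gust model:

```python
import math

def gust_speed(y, t, amplitude=10.0, period=10.5, wavelength=357.0):
    """Illustrative spatially constrained gust: the familiar 1-cosine
    shape in time, tapered across the rotor span coordinate y by a
    transverse cosine of the given wavelength (357 m is the upper
    bound studied, about twice the DTU 10 MW rotor diameter). Beyond
    half a wavelength the gust vanishes; a fully uniform gust is
    recovered as wavelength -> infinity. All defaults are invented."""
    if not 0.0 <= t <= period or abs(y) > wavelength / 2.0:
        return 0.0
    time_shape = 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))
    span_shape = 0.5 * (1.0 + math.cos(2.0 * math.pi * y / wavelength))
    return amplitude * time_shape * span_shape
```

Because the taper reduces the gust speed away from its center, a blade sweeping through such a field sees a lower effective load than under the fully uniform transient, consistent with the spread reported above.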
Effective theory of flavor for Minimal Mirror Twin Higgs
NASA Astrophysics Data System (ADS)
Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke
2017-10-01
We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ɛ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ɛ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ɛ'/ɛ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ɛ'/ɛ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.
Buchmueller, Oliver; Malik, Sarah A; McCabe, Christopher; Penning, Bjoern
2015-10-30
The monojet search, looking for events involving missing transverse energy (E_T) plus one or two jets, is the most prominent collider dark matter search. We show that multijet searches, which look for E_T plus two or more jets, are significantly more sensitive than the monojet search for pseudoscalar- and scalar-mediated interactions. We demonstrate this in the context of a simplified model with a pseudoscalar interaction that explains the excess in GeV energy gamma rays observed by the Fermi Large Area Telescope. We show that multijet searches already constrain a pseudoscalar interpretation of the excess in much of the parameter space where the mass of the mediator M_A is more than twice the dark matter mass m_DM. With the forthcoming run of the Large Hadron Collider at higher energies, the remaining regions of the parameter space where M_A > 2m_DM will be fully explored. Furthermore, we highlight the importance of complementing the monojet final state with multijet final states to maximize the sensitivity of the search for the production of dark matter at colliders.
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
Retrodeformation and muscular reconstruction of ornithomimosaurian dinosaur crania
Rayfield, Emily J.
2015-01-01
Ornithomimosaur dinosaurs evolved lightweight, edentulous skulls that possessed keratinous rhamphothecae. Understanding the anatomy of these taxa allows for a greater understanding of “ostrich-mimic” dinosaurs and character change during theropod dinosaur evolution. However, taphonomic processes during fossilisation often distort fossil remains. Retrodeformation offers a means by which to recover a hypothesis of the original anatomy of the specimen, and 3D scanning technologies present a way to constrain and document the retrodeformation process. Using computed tomography (CT) scan data, specimen specific retrodeformations were performed on three-dimensionally preserved but taphonomically distorted skulls of the deinocheirid Garudimimus brevipes Barsbold, 1981 and the ornithomimids Struthiomimus altus Lambe, 1902 and Ornithomimus edmontonicus Sternberg, 1933. This allowed for a reconstruction of the adductor musculature, which was then mapped onto the crania, from which muscle mechanical advantage and bite forces were calculated pre- and post-retrodeformation. The extent of the rhamphotheca was varied in each taxon to represent morphologies found within modern Aves. Well constrained retrodeformation allows for increased confidence in anatomical and functional analysis of fossil specimens and offers an opportunity to more fully understand the soft tissue anatomy of extinct taxa. PMID:26213655
Design forms of total knee replacement.
Walker, P S; Sathasivam, S
2000-01-01
The starting point of this article is a general design criterion applicable to all types of total knee replacement. This criterion is then expanded upon to provide more specifics of the required kinematics and the forces which the total knee must sustain. A characteristic which differentiates total knees is the amount of constraint that is required, and whether the constraint is translational or rotational. The different forms of total knee replacement are described in terms of these constraints, starting with the least constrained unicompartmental designs and extending to the almost fully constrained fixed and rotating hinges. Much attention is given to the range of designs between these two extremes, because they constitute by far the largest group in usage. This category includes condylar replacements where the cruciate ligaments are preserved or resected, posterior cruciate substituting designs and mobile bearing knees. A new term, 'guided motion knees', is applied to the growing number of designs which control the kinematics by the use of intercondylar cams or specially shaped and even additional bearing surfaces. The final section deals with the selection of an appropriate design of total knee for specific indications based on the design characteristics.
NASA Astrophysics Data System (ADS)
DeCarlo, Thomas M.; Holcomb, Michael; McCulloch, Malcolm T.
2018-05-01
The isotopic and elemental systematics of boron in aragonitic coral skeletons have recently been developed as a proxy for the carbonate chemistry of the coral extracellular calcifying fluid. With knowledge of the boron isotopic fractionation in seawater and the B/Ca partition coefficient (KD) between aragonite and seawater, measurements of coral skeleton δ11B and B/Ca can potentially constrain the full carbonate system. Two sets of abiogenic aragonite precipitation experiments designed to quantify KD have recently made possible the application of this proxy system. However, while different KD formulations have been proposed, there has not yet been a comprehensive analysis that considers both experimental datasets and explores the implications for interpreting coral skeletons. Here, we evaluate four potential KD formulations: three previously presented in the literature and one newly developed. We assess how well each formulation reconstructs the known fluid carbonate chemistry from the abiogenic experiments, and we evaluate the implications for deriving the carbonate chemistry of coral calcifying fluid. Three of the KD formulations performed similarly when applied to abiogenic aragonites precipitated from seawater and to coral skeletons. Critically, we find that some uncertainty remains in understanding the mechanism of boron elemental partitioning between aragonite and seawater, and addressing this question should be a target of additional abiogenic precipitation experiments. Despite this, boron systematics can already be applied to quantify the coral calcifying fluid carbonate system, although uncertainties associated with the proxy system should be carefully considered for each application. Finally, we present a user-friendly computer code that calculates coral calcifying fluid carbonate chemistry, including propagation of uncertainties, given inputs of boron systematics measured in coral skeleton.
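The δ11B half of the proxy system described above is commonly inverted for calcifying-fluid pH via the standard borate/boric-acid isotope mass balance. A minimal sketch follows; the seawater δ11B, fractionation factor, and pK_B values are illustrative defaults (roughly 25°C, salinity 35), not the formulations evaluated in this study.

```python
import math

def boron_pH(d11B_carb, d11B_sw=39.61, alpha=1.0272, pKB=8.60):
    """pH estimate from skeletal d11B (permil), assuming the skeleton
    records the borate ion. Standard rearrangement of the boron isotope
    mass balance; default constants are illustrative values for ~25 C,
    S = 35, not values from this study."""
    return pKB - math.log10(
        (d11B_sw - d11B_carb)
        / (alpha * d11B_carb - d11B_sw + 1000.0 * (alpha - 1.0))
    )
```

Higher skeletal δ11B maps to higher fluid pH; combining this with a B/Ca-based KD then constrains the remaining carbonate-system parameters.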
Cryopreservation of pluripotent stem cell aggregates in defined protein-free formulation.
Sart, Sébastien; Ma, Teng; Li, Yan
2013-01-01
Cultivation of undifferentiated pluripotent stem cells (PSCs) as aggregates has emerged as an efficient culture configuration, enabling rapid and controlled large scale expansion. Aggregate-based PSC cryopreservation facilitates the integrated process of cell expansion and cryopreservation, but its feasibility has not been demonstrated. The goals of the current study are to assess the suitability of cryopreserving intact mouse embryonic stem cell (mESC) aggregates and to investigate the effects of aggregate size and the formulation of the cryopreservation solution on mESC survival and recovery. The results demonstrated size-dependent cell survival and recovery of intact aggregates. In particular, the generation of reactive oxygen species (ROS) and caspase activation were reduced for small aggregates (109 ± 55 μm) compared to medium (245 ± 77 μm) and large (365 ± 141 μm) ones, leading to improved cell recovery. In addition, a defined protein-free formulation was tested and found to promote aggregate survival, eliminating cell exposure to animal serum. The cryopreserved aggregates also maintained the pluripotent markers and the capacity to differentiate into the three germ layers after thawing. In summary, the cryopreservation of small PSC aggregates in a defined protein-free formulation was shown to be a suitable approach toward a fully integrated expansion and cryopreservation process at large scale. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Andrs, David; Martineau, Richard Charles
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible, and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrates the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using BIGHORN.
Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2006-09-01
This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frames was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust, but showed high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
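The dynamic-programming step used to relax an over-constrained model border can be illustrated with a minimum-cost path through a 2-D cost array, where the cost at each pixel would combine the intensity and motion features mentioned above. This is a generic sketch of the technique, not the authors' exact cost function; the smoothness constraint (row index changes by at most one per column) is an assumption for illustration.

```python
def min_cost_path(cost):
    """Trace a left-to-right minimum-cost path through a 2-D cost array,
    allowing the row index to move by at most one per column (a common
    smoothness constraint in border detection)."""
    rows, cols = len(cost), len(cost[0])
    # acc[r][c]: cheapest total cost of any path ending at (r, c)
    acc = [row[:] for row in cost]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            best_r = min(
                (rr for rr in (r - 1, r, r + 1) if 0 <= rr < rows),
                key=lambda rr: acc[rr][c - 1],
            )
            acc[r][c] += acc[best_r][c - 1]
            back[r][c] = best_r
    # Recover the path from the cheapest terminal node.
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

On a low-cost diagonal "valley" the recovered path follows the valley, which is the behavior that lets the border escape a mis-placed model boundary.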
Yassin, Samy; Goodwin, Daniel J; Anderson, Andrew; Sibik, Juraj; Wilson, D Ian; Gladden, Lynn F; Zeitler, J Axel
2015-10-01
Disintegration performance was measured by analysing both water ingress and tablet swelling of pure microcrystalline cellulose (MCC) and in mixture with croscarmellose sodium using terahertz pulsed imaging (TPI). Tablets made from pure MCC with porosities of 10% and 15% showed similar swelling and transport kinetics: within the first 15 s, tablets had swollen by up to 33% of their original thickness and water had fully penetrated the tablet following Darcy flow kinetics. In contrast, MCC tablets with a porosity of 5% exhibited much slower transport kinetics, with swelling to only 17% of their original thickness and full water penetration reached after 100 s, dominated by case II transport kinetics. The effect of adding superdisintegrant to the formulation and varying the temperature of the dissolution medium between 20°C and 37°C on the swelling and transport process was quantified. We have demonstrated that TPI can be used to non-invasively analyse the complex disintegration kinetics of formulations that take place on timescales of seconds and is a promising tool to better understand the effect of dosage form microstructure on its performance. By relating immediate-release formulations to mathematical models used to describe controlled release formulations, it becomes possible to use this data for formulation design. © 2015 The Authors. Journal of Pharmaceutical Sciences published by Wiley Periodicals, Inc. and the American Pharmacists Association. J Pharm Sci 104:3440-3450, 2015.
Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration
NASA Astrophysics Data System (ADS)
Kovtunenko, V. A.
2006-10-01
Based on a level-set description of a crack moving with a given velocity, the problem of shape perturbation of the crack is considered. Nonpenetration conditions are imposed between opposite crack surfaces, resulting in a constrained minimization problem that describes the equilibrium of a solid with the crack. We suggest a minimax formulation of the state problem, thus allowing curvilinear (nonplanar) cracks to be considered. Utilizing primal-dual methods of shape sensitivity analysis, we obtain the general formula for a shape derivative of the potential energy, which describes an energy-release rate for the curvilinear cracks. The conditions sufficient to rewrite it in the form of a path-independent integral (J-integral) are derived.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Monasson, Rémi
2015-09-01
The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons for the success of MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measured values of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.
A time-parallel approach to strong-constraint four-dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Rao, Vishwas; Sandu, Adrian
2016-05-01
A parallel-in-time algorithm based on an augmented Lagrangian approach is proposed to solve four-dimensional variational (4D-Var) data assimilation problems. The assimilation window is divided into multiple sub-intervals, which allows parallelization of the cost function and gradient computations. The solutions to the continuity equations across interval boundaries are added as constraints. The augmented Lagrangian approach leads to a different formulation of the variational data assimilation problem than the weakly constrained 4D-Var. A combination of serial and parallel 4D-Vars to increase performance is also explored. The methodology is illustrated on data assimilation problems involving the Lorenz-96 and the shallow water models.
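The structure of this time-parallel scheme can be sketched on a deliberately tiny problem: a scalar linear model split into two sub-intervals, with each sub-interval's initial state fitted to its own observations and a continuity constraint enforced at the boundary via an augmented Lagrangian. All model constants and iteration counts below are illustrative, and the per-interval gradients (which are the parallelizable pieces) are evaluated serially here.

```python
def parallel_4dvar_toy(a=0.9, m=3, x0_true=1.0, rho=1.0,
                       outer=20, inner=200, lr=0.05):
    """Toy augmented-Lagrangian splitting of a 4D-Var window into two
    sub-intervals for the scalar model x_{k+1} = a * x_k. Returns the
    optimized initial states (s1, s2) of the two sub-intervals."""
    # Synthetic, noise-free observations across the full window.
    obs = [x0_true * a**k for k in range(2 * m)]
    obs1, obs2 = obs[:m], obs[m:]

    def grad_J(s, ys):
        # Gradient of the sub-interval misfit sum_k (a^k s - y_k)^2.
        # Each sub-interval's gradient is independent -> parallelizable.
        return sum(2.0 * a**k * (a**k * s - y) for k, y in enumerate(ys))

    s1, s2, lam = 0.0, 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):          # inner minimization by gradient descent
            r = s2 - a**m * s1          # continuity residual at the boundary
            g1 = grad_J(s1, obs1) - a**m * (lam + rho * r)
            g2 = grad_J(s2, obs2) + (lam + rho * r)
            s1 -= lr * g1
            s2 -= lr * g2
        lam += rho * (s2 - a**m * s1)   # multiplier update
    return s1, s2
```

With noise-free observations the scheme recovers the true initial state of each sub-interval, and the boundary residual is driven to zero by the multiplier updates.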
NASA Technical Reports Server (NTRS)
Grimm, Robert E.; Solomon, Sean C.
1988-01-01
Models for the viscous relaxation of impact crater topography are used to constrain the crustal thickness (H) and the mean lithospheric thermal gradient beneath the craters on Venus. A general formulation for gravity-driven flow in a linearly viscous fluid has been obtained which incorporates the densities and temperature-dependent effective viscosities of distinct crust and mantle layers. An upper limit to the crustal volume of Venus of 10 to the 10th cu km is obtained which implies either that the average rate of crustal generation has been much smaller on Venus than on earth or that some form of crustal recycling has occurred on Venus.
NASA Astrophysics Data System (ADS)
Pepi, John W.
2017-08-01
Thermally induced stress is readily calculated for linear elastic material properties using Hooke's law, in which, for situations where expansion is constrained, stress is proportional to the product of the material's elastic modulus and its thermal strain. When material behavior is nonlinear, one needs to make use of nonlinear theory. However, we can avoid that complexity in some situations. For situations in which both the elastic modulus and the coefficient of thermal expansion vary with temperature, solutions can be formulated using secant properties. A theoretical approach is thus presented to calculate stresses for nonlinear (neo-Hookean) materials. This is important for high acuity optical systems undergoing large temperature extremes.
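The secant-property idea can be sketched as follows: the secant CTE is the total thermal strain divided by the temperature change from the reference temperature, and the constrained stress is then Hooke's law evaluated with secant values. The property curves in the test are illustrative placeholders, not material data from the paper.

```python
def secant_cte(total_strain, T_ref):
    """Build a secant CTE function from a total-thermal-strain curve:
    alpha_s(T) = eps_th(T) / (T - T_ref)."""
    def alpha(T):
        return total_strain(T) / (T - T_ref)
    return alpha

def constrained_thermal_stress(E_s, alpha_s, T_ref, T):
    """Hooke's-law stress for a fully constrained member using secant
    properties: sigma = E_s(T) * alpha_s(T) * (T - T_ref)."""
    return E_s(T) * alpha_s(T) * (T - T_ref)
```

For a linear strain curve and constant modulus this reduces to the familiar sigma = E * alpha * dT; the payoff of the secant form is that the same expression remains valid when E and alpha drift with temperature.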
NASA Astrophysics Data System (ADS)
Priya, B. Ganesh; Muthukumar, P.
2018-02-01
This paper deals with the trajectory controllability of a class of multi-order fractional linear systems subject to a constant delay in the state vector. The solution of the coupled fractional delay differential equation is established via the Mittag-Leffler function. The necessary and sufficient condition for trajectory controllability is formulated and proved using the generalized Gronwall inequality. The approximate trajectory for the proposed system is obtained through the shifted Jacobi operational matrix method. Numerical simulation of the approximate solution confirms the theoretical results. Finally, some remarks and comments on existing results for the constrained controllability of fractional dynamical systems are presented.
NASA Astrophysics Data System (ADS)
Algarray, A. F. A.; Jun, H.; Mahdi, I.-E. M.
2017-11-01
The effects of the end conditions of cross-ply laminated composite beams on their dimensionless natural frequencies of free vibration are investigated. The problem is analyzed and solved using the energy approach, which is formulated by a finite element model. Various end conditions of beams are used; each beam has either movable or immovable ends. Numerical results are verified by comparisons with other relevant works. It is found that more constrained beams have higher natural frequencies of transverse vibration. The values of the natural frequencies of the longitudinal modes are found to be the same for all beams with movable ends because they are generated by longitudinal movements only.
Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.
2016-01-01
The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.
Diffuse-Interface Methods in Fluid Mechanics
NASA Technical Reports Server (NTRS)
Anderson, D. M.; McFadden, G. B.; Wheeler, A. A.
1997-01-01
The authors review the development of diffuse-interface models of hydrodynamics and their application to a wide variety of interfacial phenomena. The authors discuss the issues involved in formulating diffuse-interface models for single-component and binary fluids. Recent applications and computations using these models are discussed in each case. Further, the authors address issues including sharp-interface analyses that relate these models to the classical free-boundary problem, related computational approaches to describe interfacial phenomena, and related approaches describing fully-miscible fluids.
NASA Astrophysics Data System (ADS)
Kumari, Vandana; Kumar, Ayush; Saxena, Manoj; Gupta, Mridula
2018-01-01
The sub-threshold model formulation of a Gaussian Doped Double Gate JunctionLess (GD-DG-JL) FET including the source/drain depletion length is reported in the present work under the assumption that the ungated regions are fully depleted. To provide deeper insight into the device performance, the impact of Gaussian straggle, channel length, oxide and channel thickness, and high-k gate dielectric has been studied using extensive TCAD device simulation.
Directed polymers versus directed percolation
NASA Astrophysics Data System (ADS)
Halpin-Healy, Timothy
1998-10-01
Universality plays a central role within the rubric of modern statistical mechanics, wherein an insightful continuum formulation rises above irrelevant microscopic details, capturing essential scaling behaviors. Nevertheless, occasions do arise where the lattice or another discrete aspect can constitute a formidable legacy. Directed polymers in random media, along with its close sibling, directed percolation, provide an intriguing case in point. Indeed, the deep blood relation between these two models may have sabotaged past efforts to fully characterize the Kardar-Parisi-Zhang universality class, to which the directed polymer belongs.
Origin and Evolutionary Alteration of the Mitochondrial Import System in Eukaryotic Lineages.
Fukasawa, Yoshinori; Oda, Toshiyuki; Tomii, Kentaro; Imai, Kenichiro
2017-07-01
Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages. Various evolutionary hypotheses have been formulated to explain this diversity. To resolve any contradiction, estimating the primitive state and clarifying changes from that state are necessary. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases' evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes including convergent and migrational gains of a presequence receptor in TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Canonical quantization of constrained systems and coadjoint orbits of Diff(S^1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherer, W.M.
It is shown that Dirac's treatment of constrained Hamiltonian systems and Schwinger's action principle quantization lead to identical commutation relations. An explicit relation between the Lagrange multipliers in the action principle approach and the additional terms in the Dirac bracket is derived. The equivalence of the two methods is demonstrated in the case of the non-linear sigma model. Dirac's method is extended to superspace and this extension is applied to the chiral superfield. The Dirac brackets of the massive interacting chiral superfield are derived and shown to give the correct commutation relations for the component fields. The Hamiltonian of the theory is given and the Hamiltonian equations of motion are computed. They agree with the component field results. An infinite sequence of differential operators which are covariant under the coadjoint action of Diff(S^1) and analogous to Hill's operator is constructed. They map conformal fields of negative integer and half-integer weight to their dual space. Some properties of these operators are derived and possible applications are discussed. The Korteweg-de Vries equation is formulated as a coadjoint orbit of Diff(S^1).
Ma, Jun; Chen, Si-Lu; Kamaldin, Nazir; Teo, Chek Sing; Tay, Arthur; Mamun, Abdullah Al; Tan, Kok Kiong
2017-11-01
The biaxial gantry is widely used in many industrial processes that require high precision Cartesian motion. The conventional rigid-link version suffers from breakdown of the joints if any de-synchronization between the two carriages occurs. To prevent this potential risk, a flexure-linked biaxial gantry is designed to allow a small rotation angle of the cross-arm. Nevertheless, chattering of the control signals and inappropriate design of the flexure joint can induce resonant modes of the end-effector. Thus, in this work, the design requirements in terms of tracking accuracy, biaxial synchronization, and resonant mode suppression are achieved by integrated optimization of the stiffness of the flexures and the PID controller parameters for a class of point-to-point reference trajectories with the same dynamics but different steps. From here, an H_2 optimization problem with defined constraints is formulated, and an efficient iterative solver is proposed by hybridizing direct computation of the constrained projection gradient with a line search for the optimal step. Comparative experimental results obtained on the testbed are presented to verify the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
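The solver structure described above (projected gradient plus line search) can be sketched generically: step along the negative gradient, project back onto the feasible set, and backtrack the step size until a decrease is obtained. This is an illustration of the technique, not the authors' exact H_2 solver; the quadratic objective and box constraints in the test are assumptions.

```python
def projected_gradient(f, grad, project, x0, steps=200, t0=1.0, beta=0.5):
    """Constrained minimization by projected gradient descent with a
    backtracking line search. `project` maps a point onto the feasible
    set; `beta` shrinks the trial step until f decreases."""
    x = project(x0)
    for _ in range(steps):
        g = grad(x)
        t = t0
        while t > 1e-12:
            x_new = project([xi - t * gi for xi, gi in zip(x, g)])
            if f(x_new) <= f(x):   # accept on simple decrease
                break
            t *= beta              # otherwise backtrack
        x = x_new
    return x
```

When the unconstrained minimizer lies outside the feasible set, the iterates settle on the projection of the descent direction onto the boundary, which is the behavior exploited when tuning constrained controller parameters.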
An enhanced beam model for constrained layer damping and a parameter study of damping contribution
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Shepard, W. Steve, Jr.
2009-01-01
An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, the model can be accepted as a general computation model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.
Microbial processing of carbon in hydrothermal systems (Invited)
NASA Astrophysics Data System (ADS)
LaRowe, D.; Amend, J. P.
2013-12-01
Microorganisms are known to be active in hydrothermal systems. They catalyze reactions that consume and produce carbon compounds as a result of their efforts to gain energy, grow and replace biomass. However, the rates of these processes, as well as the size of the active component of microbial populations, are poorly constrained in hydrothermal environments. In order to better characterize biogeochemical processes in these settings, a quantitative relationship between rates of microbial catalysis, energy supply and demand and population size is presented. Within this formulation, rates of biomass change are determined as a function of the proportion of catabolic power that is converted into biomass - either new microorganisms or the replacement of existing cell components - and the amount of energy that is required to synthesize biomass. The constraints that hydrothermal conditions place on power supply and demand are explicitly taken into account. The chemical composition, including the concentrations of organic compounds, of diffuse and focused flow hydrothermal fluids, hydrothermally influenced sediment pore water and fluids from the oceanic lithosphere are used in conjunction with cell count data and the model described above to constrain the rates of microbial processes that influence the carbon cycle in the Juan de Fuca hydrothermal system.
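The power-balance bookkeeping described above, where biomass change depends on the fraction of catabolic power routed to synthesis and the energetic cost of building biomass, can be sketched with a toy budget. Every number in the test is an illustrative placeholder, not a measured hydrothermal value.

```python
def biomass_rate(power_supply, n_cells, maint_power, yield_fraction,
                 synthesis_energy):
    """Net biomass change rate (cells per second) from a population
    power budget: power left after maintenance, times the fraction
    routed to synthesis, divided by the energy cost of one cell.

    power_supply      -- catabolic power available to the population (W)
    n_cells           -- active population size
    maint_power       -- maintenance power per cell (W)
    yield_fraction    -- fraction of surplus power converted to biomass
    synthesis_energy  -- energy to synthesize one cell (J)
    """
    surplus = power_supply - n_cells * maint_power   # W left after maintenance
    return yield_fraction * surplus / synthesis_energy
```

A negative rate (maintenance demand exceeding supply) corresponds to a shrinking active population, which is one way cell-count data can constrain the power supply in these settings.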
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
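The core of the formulation, a truncated distribution for transit time plus a punctuality constraint, can be sketched as follows. A truncated normal stands in for the paper's general truncated density, and the assumption that operating cost rises as the average transit time is pushed down (so the largest feasible mean is cheapest) is illustrative.

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def punctuality(mu, sigma, deadline, lo, hi):
    """P(transit time <= deadline) for a transit time modelled as a
    normal distribution truncated to [lo, hi]."""
    z = normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
    num = normal_cdf(min(deadline, hi), mu, sigma) - normal_cdf(lo, mu, sigma)
    return num / z

def cheapest_mean(sigma, deadline, lo, hi, target, cost, step=0.01):
    """Largest (assumed cheapest) average transit time whose punctuality
    still meets the target: scan down from the deadline until the
    punctuality constraint is satisfied."""
    mu = deadline
    while mu > lo and punctuality(mu, sigma, deadline, lo, hi) < target:
        mu -= step
    return mu, cost(mu)
```

The returned mean is the least aggressive schedule that still meets the desired punctuality level, i.e. the constrained optimum under a cost that decreases with average transit time.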
NASA Technical Reports Server (NTRS)
Sterritt, Roy (Inventor); Hinchey, Michael G. (Inventor); Penn, Joaquin (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which, in some embodiments, an agent-oriented specification modeled with MaCMAS is analyzed, flaws in the agent-oriented specification are corrected, and an implementation is derived from the corrected specification. Described herein are systems, methods and apparatus that produce fully (mathematically) tractable development of agent-oriented specifications modeled with the methodology fragment for analyzing complex multiagent systems (MaCMAS) and policies for autonomic systems, from requirements through to code generation. The systems, methods and apparatus described herein are illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming systems, methods and apparatus described herein may provide faster, higher-quality development and maintenance of autonomic systems based on user formulation of policies.
Low Molecular Weight Polymethacrylates as Multi-Functional Lubricant Additives
Cosimbescu, Lelia; Vellore, Azhar; Shantini Ramasamy, Uma; ...
2018-04-24
In this study, low molecular weight, moderately polar polymethacrylate polymers are explored as potential multi-functional lubricant additives. The performance of these novel additives in base oil is evaluated in terms of their viscosity index, shear stability, and friction-and-wear behavior. The new compounds are compared to two benchmarks, a typical polymeric viscosity modifier and a fully formulated oil. Results show that the best performing of the new polymers exhibit viscosity index and friction comparable to those of both benchmarks, far superior shear stability to either benchmark (as much as 15x lower shear loss), and wear reduction significantly better than a typical viscosity modifier (lower wear volume by a factor of 2-3). The findings also indicate that the polarity and molecular weight of the polymers affect their performance, suggesting that future synthetic strategies may enable this new class of additives to replace multiple additives in typical lubricant formulations.
NASA Astrophysics Data System (ADS)
Mišković, Zoran L.; Akbari, Kamran; Segui, Silvina; Gervasoni, Juana L.; Arista, Néstor R.
2018-05-01
We present a fully relativistic formulation for the energy loss rate of a charged particle moving parallel to a sheet containing two-dimensional electron gas, allowing that its in-plane polarization may be described by different longitudinal and transverse conductivities. We apply our formulation to the case of a doped graphene layer in the terahertz range of frequencies, where excitation of the Dirac plasmon polariton (DPP) in graphene plays a major role. By using the Drude model with zero damping we evaluate the energy loss rate due to excitation of the DPP, and show that the retardation effects are important when the incident particle speed and its distance from graphene both increase. Interestingly, the retarded energy loss rate obtained in this manner may be both larger and smaller than its non-retarded counterpart for different combinations of the particle speed and distance.
Bidirectional composition on Lie groups for gradient-based image alignment.
Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick
2010-09-01
In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to conventional approaches, the BCL method takes advantage of the gradients of both the template and the current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient-based approaches is fully discussed. The first one, i.e., the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out dealing with computational complexity, convergence rate and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.
Hierarchical Boltzmann simulations and model error estimation
NASA Astrophysics Data System (ADS)
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows the result to be successively improved toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
Second generation PMR polyimide/fiber composites
NASA Technical Reports Server (NTRS)
Cavano, P. J.
1979-01-01
A second-generation polymerization of monomeric reactants (PMR) polyimide matrix system (PMR 2) was characterized in both neat resin and composite form with two different graphite fiber reinforcements. Three different formulated molecular weight levels of laboratory-prepared PMR 2 were examined, in addition to a purchased experimental fully formulated PMR 2 precursor solution. Isothermal aging of graphite fibers, neat resin samples and composite specimens in air at 316 C was investigated. Humidity exposures at 65 C and 97 percent relative humidity were conducted for both neat resin and composites for eight-day periods. Anaerobic char of neat resin and fire testing of composites were conducted with PMR 15, PMR 2, and an epoxy system. Composites were fire tested on a burner rig developed for this program. Results indicate that neat PMR 2 resins exhibit excellent isothermal resistance and that PMR 2 composite properties appear to be influenced by the thermo-oxidative stability of the reinforcing fiber.
Nakata, Hiroya; Fedorov, Dmitri G; Nagata, Takeshi; Kitaura, Kazuo; Nakamura, Shinichiro
2015-07-14
The fully analytic first and second derivatives of the energy in the frozen domain formulation of the fragment molecular orbital (FMO) method were developed and applied to locate transition states and determine vibrational contributions to free energies. The development is focused on the frozen domain with dimers (FDD) model. The intrinsic reaction coordinate method was interfaced with FMO. Simulations of IR and Raman spectra were enabled using FMO/FDD by developing the calculation of intensities. The accuracy is evaluated for S(N)2 reactions in explicit solvent, and for the binding free energies of a protein-ligand complex of the Trp-cage protein (PDB: 1L2Y). FMO/FDD is applied to study the keto-enol tautomeric reaction of phosphoglycolohydroxamic acid and triosephosphate isomerase (PDB: 7TIM), and the role of amino acid residue fragments in the reaction is discussed.
Application of an enriched FEM technique in thermo-mechanical contact problems
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Bahmani, B.
2018-02-01
In this paper, an enriched FEM technique is employed for the thermo-mechanical contact problem based on the extended finite element method. A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that takes into account deformable continuum mechanics and transient heat transfer analysis. The Coulomb frictional law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time-splitting method, and the final set of non-linear equations is solved by the Newton-Raphson method using a staggered algorithm. Finally, in order to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.
Smoothed Particle Hydrodynamics Simulations of Ultrarelativistic Shocks with Artificial Viscosity
NASA Astrophysics Data System (ADS)
Siegler, S.; Riffert, H.
2000-03-01
We present a fully Lagrangian conservation form of the general relativistic hydrodynamic equations for perfect fluids with artificial viscosity in a given arbitrary background spacetime. This conservation formulation is achieved by choosing suitable Lagrangian time evolution variables, from which the generic fluid variables of rest-mass density, 3-velocity, and thermodynamic pressure have to be determined. We present the corresponding equations for an ideal gas and show the existence and uniqueness of the solution. On the basis of the Lagrangian formulation we have developed a three-dimensional general relativistic smoothed particle hydrodynamics (SPH) code using the standard SPH formalism as known from nonrelativistic fluid dynamics. One-dimensional simulations of a shock tube and a wall shock are presented together with a two-dimensional test calculation of an inclined shock tube. With our method we can model ultrarelativistic fluid flows including shocks with Lorentz factors as large as 1000.
Wan, Xiaoqing; Zhao, Chunhui
2017-06-01
As a competitive machine learning algorithm, the stacked sparse autoencoder (SSA) has achieved outstanding popularity in exploiting high-level features for classification of hyperspectral images (HSIs). In general, in the SSA architecture, the nodes between adjacent layers are fully connected and need to be iteratively fine-tuned during the pretraining stage; however, the nodes of previous layers further away may be less likely to have a dense correlation to a given node of subsequent layers. Therefore, to reduce the classification error and increase the learning rate, this paper proposes a general framework of locally connected SSA; that is, the biologically inspired local receptive field (LRF) constrained SSA architecture is employed to simultaneously characterize the local correlations of spectral features and extract high-level feature representations of hyperspectral data. In addition, the appropriate receptive field constraint is concurrently updated by measuring the spatial distances from the neighbor nodes to the corresponding node. Finally, an efficient random forest classifier is cascaded to the last hidden layer of the SSA architecture as a benchmark classifier. Experimental results on two real HSI datasets demonstrate that the proposed hierarchical LRF constrained stacked sparse autoencoder and random forest (SSARF) provides encouraging results with respect to other competing methods, for instance, improvements of overall accuracy in a range of 0.72%-10.87% for the Indian Pines dataset and 0.74%-7.90% for the Kennedy Space Center dataset; moreover, it requires less running time than similar SSA-based methods.
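The local receptive field constraint can be pictured as a sparsity mask on the first-layer weights: each hidden node connects only to spectral bands within a fixed radius of its "center" band. The sketch below only builds such a mask and counts the surviving connections; the layer sizes and radius are invented, and the actual SSARF training and distance-based updating of the receptive field are not reproduced:

```python
# Toy sketch of an LRF connectivity mask (illustrative sizes, not the paper's).
def lrf_mask(n_inputs, n_hidden, radius):
    """mask[j][i] == 1 iff hidden node j may connect to input band i."""
    mask = []
    for j in range(n_hidden):
        # Spread hidden-node centers evenly over the spectral bands,
        # then keep only a local window of width 2*radius + 1.
        center = round(j * (n_inputs - 1) / max(n_hidden - 1, 1))
        mask.append([1 if abs(i - center) <= radius else 0
                     for i in range(n_inputs)])
    return mask

mask = lrf_mask(n_inputs=200, n_hidden=50, radius=10)
kept = sum(sum(row) for row in mask)
total = 200 * 50
print(f"connections kept: {kept}/{total} ({100 * kept / total:.0f}%)")
```

During training, the mask would be applied elementwise to the weight matrix (and its gradient) so that pruned connections stay at zero, which is what replaces the fully connected layer described above.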
NASA Astrophysics Data System (ADS)
Dardiri, Ahmad; Sutrisno, Kuncoro, Tri; Ichwanto, Muhamad Aris; Suparji
2017-09-01
Professionalism of construction workers is one of the keys to the success of infrastructure development projects. The professionalism of the workforce is demonstrated through possession of an expertise competence certificate (SKA) and/or a certificate of skills (SKT), issued formally through competency tests by the National Construction Services Development Agency (LPJKN). The magnitude of the national need for skilled manpower has not been matched by the availability of a professional workforce. Strategies to develop the quality of resources require sufficient information on the characteristics of the resources themselves, facilities, constraints, stakeholder support, regulations, and socioeconomic as well as cultural conditions. The problems faced by Indonesia in improving the competitiveness of skilled construction workers are (1) what the level of professionalism of skilled workers in the construction field is, (2) what the constraints on improving the quality of skilled construction workers are, and (3) what an appropriate model of education and training for skilled construction work looks like. The study was designed with quantitative and qualitative approaches. Quantitative methods were used to describe the profile of skilled construction workers. Qualitative methods were used to identify constraints in improving the quality of skilled labor, as well as to formulate a viable collaborative education and training model for improving the quality of skilled labor. Data were collected by documentation, observation, and interview. The results of the study indicate that (1) the professional knowledge of skilled construction workers is still low, (2) the constraints faced in developing the quality of skilled construction labor include economic and structural constraints, and (3) a collaborative education and training model can improve the quality of skilled construction labor.
Can re-regulation reservoirs and batteries cost-effectively mitigate sub-daily hydropeaking?
NASA Astrophysics Data System (ADS)
Haas, J.; Nowak, W.; Anindito, Y.; Olivares, M. A.
2017-12-01
To compensate for mismatches between generation and load, hydropower plants frequently operate in strong hydropeaking schemes, which is harmful to the downstream ecosystem. Furthermore, new power market structures and variable renewable systems may exacerbate this behavior. Ecological constraints (minimum flows, maximum ramps) are frequently used to mitigate hydropeaking, but these stand in direct tradeoff with the operational flexibility required for integrating renewable technologies. Fortunately, there are also physical measures (i.e., re-regulation reservoirs and batteries), but to date there are no studies of their cost-effectiveness for hydropeaking mitigation. This study aims to fill that gap. For this, we formulate an hourly mixed-integer linear optimization model to plan the weekly operation of a hydro-thermal-renewable power system from southern Chile. The opportunity cost of water (needed for this weekly scheduling) is obtained from a mid-term program solved with dynamic programming. We compare the current (unconstrained) hydropower operation with an ecologically constrained operation. The resulting cost increase is then contrasted with the annual payments necessary for the physical hydropeaking mitigation options. For highly constrained operations, both re-regulation reservoirs and batteries prove economically attractive for hydropeaking mitigation. For intermediately constrained scenarios, re-regulation reservoirs are still economic, whereas batteries can be a viable solution only if they become cheaper in the future. Given current cost projections, their break-even point (for hydropeaking mitigation) is expected within the next ten years. Finally, less stringent hydropeaking constraints do not justify physical mitigation measures, as the necessary flexibility can be provided by other power plants of the system.
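The trade-off between ecological ramp constraints and dispatch cost can be illustrated with a toy model. The greedy two-unit dispatch below is only a stand-in for the study's mixed-integer linear program; the prices, capacities, load profile, and up-ramp-only limit are all invented for illustration:

```python
# Greedy hourly dispatch: cheap hydro (ramp-limited) first, expensive thermal
# covers the remainder. Invented parameters; a real model would also include
# reservoir volumes, minimum flows, and down-ramp limits.
def dispatch(load, hydro_cap, ramp_up, c_hydro=5.0, c_thermal=50.0):
    hydro, prev, cost = [], 0.0, 0.0
    for demand in load:
        h = min(hydro_cap, demand, prev + ramp_up)       # capacity, load, ramp
        hydro.append(h)
        cost += c_hydro * h + c_thermal * (demand - h)   # thermal covers the rest
        prev = h
    return hydro, cost

load = [50.0, 120.0, 200.0, 90.0]                               # MW per hour
_, cost_free = dispatch(load, hydro_cap=150.0, ramp_up=1e9)     # unconstrained
sched, cost_eco = dispatch(load, hydro_cap=150.0, ramp_up=40.0) # eco-constrained
print(cost_free, cost_eco)
```

Tightening the ramp limit shifts generation from cheap hydro to expensive thermal; that cost increase is exactly what the study compares against the annualized cost of re-regulation reservoirs or batteries.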
Constrained dynamics approach for motion synchronization and consensus
NASA Astrophysics Data System (ADS)
Bhatia, Divya
In this research we propose to develop constrained dynamical systems based stable attitude synchronization, consensus and tracking (SCT) control laws for the formation of rigid bodies. The generalized constrained dynamics Equations of Motion (EOM) are developed utilizing constraint potential energy functions that enforce communication constraints. Euler-Lagrange equations are employed to develop the non-linear constrained dynamics of multiple vehicle systems. The constraint potential energy is synthesized based on a graph theoretic formulation of the vehicle-vehicle communication. Constraint stabilization is achieved via Baumgarte's method. The performance of these constrained dynamics based formations is evaluated for bounded control authority. The above method has been applied to various cases and the results have been obtained using MATLAB simulations showing stability, synchronization, consensus and tracking of formations. The first case corresponds to an N-pendulum formation without external disturbances, in which the springs and the dampers connected between the pendulums act as the communication constraints. The damper helps in stabilizing the system by damping the motion whereas the spring acts as a communication link relaying relative position information between two connected pendulums. Lyapunov stabilization (energy based stabilization) technique is employed to depict the attitude stabilization and boundedness. Various scenarios involving different values of springs and dampers are simulated and studied. Motivated by the first case study, we study the formation of N 2-link robotic manipulators. The governing EOM for this system is derived using Euler-Lagrange equations. A generalized set of communication constraints are developed for this system using graph theory. The constraints are stabilized using Baumgarte's techniques. 
The attitude SCT is established for this system and the results are shown for the special case of three 2-link robotic manipulators. These methods are then applied to the formation of N-spacecraft. Modified Rodrigues Parameters (MRP) are used for attitude representation of the spacecraft because of their advantage of being a minimum parameter representation. Constrained non-linear equations of motion for this system are developed and stabilized using a Proportional-Derivative (PD) controller derived based on Baumgarte's method. A system of 3 spacecraft is simulated and the results for SCT are shown and analyzed. Another problem studied in this research is that of maintaining SCT under unknown external disturbances. We use an adaptive control algorithm to derive control laws for the actuator torques and develop an estimation law for the unknown disturbance parameters to achieve SCT. The estimate of the disturbance is added as a feed forward term in the actual control law to obtain the stabilization of a 3-spacecraft formation. The disturbance estimates are generated via a Lyapunov analysis of the closed loop system. In summary, the constrained dynamics method shows a lot of potential in formation control, achieving stabilization, synchronization, consensus and tracking of a set of dynamical systems.
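Baumgarte's constraint stabilization, used throughout the research above, can be demonstrated on the simplest possible case: a single pendulum written in Cartesian coordinates with the holonomic constraint C = x^2 + y^2 - L^2 = 0. The sketch below is a minimal illustration with arbitrary gains and step size, not the thesis code:

```python
# Instead of enforcing Cddot = 0 (which drifts under numerical integration),
# Baumgarte's method enforces Cddot + 2*alpha*Cdot + beta^2*C = 0, so any
# accumulated violation decays. The Lagrange multiplier is solved from that
# modified constraint equation at every step.
import math

def simulate(alpha, beta, dt=1e-3, steps=2000, L=1.0, g=9.81, m=1.0):
    x, y, vx, vy = L, 0.0, 0.0, 0.0            # start horizontal, at rest
    for _ in range(steps):
        C = x * x + y * y - L * L
        Cdot = 2.0 * (x * vx + y * vy)
        lam = -m * (2.0 * (vx * vx + vy * vy) - 2.0 * g * y
                    + 2.0 * alpha * Cdot + beta * beta * C) / (4.0 * (x * x + y * y))
        ax = 2.0 * lam * x / m                 # constraint force = lam * grad(C)
        ay = -g + 2.0 * lam * y / m
        vx += dt * ax; vy += dt * ay           # semi-implicit Euler
        x += dt * vx;  y += dt * vy
    return abs(x * x + y * y - L * L)          # final constraint violation

drift_plain = simulate(alpha=0.0, beta=0.0)    # no stabilization
drift_baum = simulate(alpha=10.0, beta=10.0)   # Baumgarte-stabilized
print(drift_plain, drift_baum)
```

The stabilized run keeps the constraint violation bounded, while the unstabilized multiplier lets integration error accumulate, which is the behavior Baumgarte's method is introduced to suppress.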
Neugebauer, E A M; Wilkinson, R C; Kehlet, H; Schug, S A
2007-07-01
Many patients still suffer severe acute pain in the postoperative period. Although guidelines for treating acute pain are widely published and promoted, most do not consider procedure-specific differences in pain experienced or in techniques that may be most effective and appropriate for different surgical settings. The procedure-specific postoperative pain management (PROSPECT) Working Group provides procedure-specific recommendations for postoperative pain management together with supporting evidence from systematic literature reviews and related procedures at http://www.postoppain.org. The methodology for PROSPECT reviews was developed and refined by discussion of the Working Group, and it adapts existing methods for formulation of consensus recommendations to the specific requirements of PROSPECT. To formulate PROSPECT recommendations, we use a methodology that takes into account study quality and source and level of evidence, and we use recognized methods for achieving group consensus, thus reducing potential bias. The new methodology is first applied in full for the 2006 update of the PROSPECT review of postoperative pain management for laparoscopic cholecystectomy. Transparency in PROSPECT processes allows the users to be fully aware of any limitations of the evidence and recommendations, thereby allowing for appropriate decisions in their own practice setting.
Expansion of seasonal influenza vaccination in the Americas
Ropero-Álvarez, Alba María; Kurtis, Hannah J; Danovaro-Holliday, M Carolina; Ruiz-Matus, Cuauhtémoc; Andrus, Jon K
2009-01-01
Background Seasonal influenza is a viral disease whose annual epidemics are estimated to cause three to five million cases of severe illness and 250,000 to 500,000 deaths worldwide. Vaccination is the main strategy for primary prevention. Methods To assess the status of influenza vaccination in the Americas, influenza vaccination data reported to the Pan American Health Organization (PAHO) through 2008 were analyzed. Results Thirty-five countries and territories administered influenza vaccine in their public health sector, compared to 13 countries in 2004. Targeted risk groups varied. Sixteen countries reported coverage among older adults, ranging from 21% to 100%; coverage data were not available for most countries and targeted populations. Some tropical countries used the Northern Hemisphere vaccine formulation and others used the Southern Hemisphere vaccine formulation. In 2008, approximately 166.3 million doses of seasonal influenza vaccine were purchased in the Americas; 30 of 35 countries procured their vaccine through PAHO's Revolving Fund. Conclusion Since 2004 there has been rapid uptake of seasonal influenza vaccine in the Americas. Challenges to fully implement influenza vaccination remain, including difficulties measuring coverage rates, variable vaccine uptake, and limited surveillance and effectiveness data to guide decisions regarding vaccine formulation and timing, especially in tropical countries. PMID:19778430
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.
We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown not to be fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
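The manufactured-solution verification strategy used above can be illustrated on a simpler model problem. The sketch below applies the same idea (choose an exact solution, derive the matching source term, and measure the observed convergence order under refinement) to the 1-D heat equation with an explicit FTCS scheme; it is not the ALEGRA magnetic diffusion setup:

```python
# Manufactured solution u(x,t) = exp(-t) sin(pi x) for u_t = u_xx + f,
# which fixes the source term f = u_t - u_xx = (pi^2 - 1) exp(-t) sin(pi x).
# Tying dt to dx^2 keeps the explicit-Euler time error from dominating.
import math

def max_error(n, t_end=0.1):
    dx = 1.0 / n
    dt = 0.25 * dx * dx                        # stable: r = dt/dx^2 = 0.25
    steps = round(t_end / dt)
    exact = lambda x, t: math.exp(-t) * math.sin(math.pi * x)
    src = lambda x, t: (math.pi ** 2 - 1.0) * math.exp(-t) * math.sin(math.pi * x)
    u = [exact(i * dx, 0.0) for i in range(n + 1)]
    t = 0.0
    for _ in range(steps):
        u = ([0.0] +
             [u[i] + 0.25 * (u[i + 1] - 2 * u[i] + u[i - 1]) + dt * src(i * dx, t)
              for i in range(1, n)] +
             [0.0])                            # exact boundary values are zero
        t += dt
    return max(abs(u[i] - exact(i * dx, t)) for i in range(n + 1))

e_coarse, e_fine = max_error(20), max_error(40)
order = math.log(e_coarse / e_fine, 2.0)
print(f"observed order ~ {order:.2f}")
```

Halving dx should cut the error by about a factor of four, giving an observed order near 2; a failure of this check is exactly the kind of implementation flaw such verification studies are designed to expose.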
Multi-dimensional upwinding-based implicit LES for the vorticity transport equations
NASA Astrophysics Data System (ADS)
Foti, Daniel; Duraisamy, Karthik
2017-11-01
Complex turbulent flows such as rotorcraft and wind turbine wakes are characterized by the presence of strong coherent structures that can be compactly described by vorticity variables. The vorticity-velocity formulation of the incompressible Navier-Stokes equations is employed to increase numerical efficiency. Compared to the traditional velocity-pressure formulation, high order numerical methods and sub-grid scale models for the vorticity transport equation (VTE) have not been fully investigated. Consistent treatment of the convection and stretching terms also needs to be addressed. Our belief is that, by carefully designing sharp gradient-capturing numerical schemes, coherent structures can be more efficiently captured using the vorticity-velocity formulation. In this work, a multidimensional upwind approach for the VTE is developed using the generalized Riemann problem-based scheme devised by Parish et al. (Computers & Fluids, 2016). The algorithm obtains high resolution by augmenting the upwind fluxes with transverse and normal direction corrections. The approach is investigated with several canonical vortex-dominated flows including isolated and interacting vortices and turbulent flows. The capability of the technique to represent sub-grid scale effects is also assessed. This work was supported by a Navy contract titled "Turbulence Modelling Across Disparate Length Scales for Naval Computational Fluid Dynamics Applications," through Continuum Dynamics, Inc.
The Search for Transient Mass Loss Events on Active Stars and Their Impacts
NASA Astrophysics Data System (ADS)
Crosley, Michael K.
2018-01-01
The conditions that determine the potential habitability of exoplanets are very diverse and still poorly understood. Magnetic eruptive events, such as flares and coronal mass ejections (CMEs), are one such concern. Stellar flares are routinely observed on cool stars, but clear signatures of stellar CMEs have been less forthcoming. CMEs are geoeffective and contribute to space weather. Stellar coronal mass ejections remain experimentally unconstrained, unlike their stellar flare counterparts, which are observed ubiquitously across the electromagnetic spectrum. Low-frequency radio bursts in the form of a type II burst offer the best means of identifying and constraining the rate and properties of stellar CMEs. CME properties can be further constrained, and solar scaling relationships tested, by simultaneously performing flare observations. The interpretation for the multi-wavelength analysis of type II events and their associated flares is tested against fully constrained solar observations. There we find that velocity measurements are typically accurate to within a factor of two and that mass constraints are accurate to within an order of magnitude. We take these lessons and apply them to observations of the nearby, active M dwarf stars YZ CMi and EQ Peg. These stars have the advantage of being well observed and constrained. Their well-documented high flare activity is expected to be accompanied by high CME activity. They have been shown to have low-frequency radio bursts in the past, and their constrained coronal properties allow us to extract the information required to interpret the type II burst. We report on 15 hours of Low Frequency Array (10-190 MHz) observations of YZ CMi and 64 hours of EQ Peg observations at the Jansky Very Large Array (230-470 MHz), 20 hours of which were observed simultaneously for flares at the Apache Point Observatory. During this time, solar scaling relationships tell us that ~70 large flares should have been produced, each of which would be associated with a corresponding CME. From our results we constrain event properties, detection limits, CME models, and atmospheric models.
Biosynthetic Polymers as Functional Materials
2016-01-01
The synthesis of functional polymers encoded with biomolecules has been an extensive area of research for decades. As such, a diverse toolbox of polymerization techniques and bioconjugation methods has been developed. The greatest impact of this work has been in biomedicine and biotechnology, where fully synthetic and naturally derived biomolecules are used cooperatively. Despite significant improvements in biocompatible and functionally diverse polymers, our success in the field is constrained by recognized limitations in polymer architecture control, structural dynamics, and biostabilization. This Perspective discusses the current status of functional biosynthetic polymers and highlights innovative strategies reported within the past five years that have made great strides in overcoming the aforementioned barriers. PMID:27375299
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.
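The idea of an embedded L(infinity)-oriented bit stream with a guaranteed maximum-error bound at every truncation point can be illustrated without wavelets at all. The toy bitplane coder below works directly on 8-bit samples; a real codec like the one proposed would apply the same principle in the wavelet domain with L(infinity)-constrained significance coding:

```python
# Sending bitplanes from most to least significant gives truncation points
# where the worst-case reconstruction error is bounded by half the unsent
# range, i.e. 2**(bits_remaining - 1). Pixel-domain toy only.
values = list(range(256))                     # 8-bit samples, all intensities

def reconstruct(v, planes_sent, total_bits=8):
    s = total_bits - planes_sent              # bits still unsent
    if s == 0:
        return v                              # lossless once all planes arrive
    return (v >> s << s) + (1 << (s - 1))     # midpoint of the unsent range

for k in range(1, 9):                         # truncate after k bitplanes
    worst = max(abs(v - reconstruct(v, k)) for v in values)
    bound = 0 if k == 8 else 1 << (8 - k - 1)
    print(f"{k} planes: max error {worst} (bound {bound})")
```

Every prefix of the stream is a valid L(infinity)-bounded representation, which is the property the proposed framework carries over to scalable wavelet coding.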
A configuration development strategy for the NASP
NASA Astrophysics Data System (ADS)
Snyder, Curtis D.; Pinckney, S. Z.
Characteristics of airframe-integrated scramjet (AIS) aerospacecraft are studied using elementary methods and a recently developed AIS analysis code. Of principal interest are the definition of the AIS and which concepts offer the most potential. One of the concepts is selected for a limited optimization study aimed at defining the relationship of exhaust area to performance potential. The study shows that, if the AIS vehicle is to be fully constrained within the 'optimum' flowpath envelope, large values of exhaust-area-to-capture-area ratio are desired. A benefit of this choice is that performance at the very highest airbreathing speeds is improved and, thus, the need to switch to rocket power may be delayed.
Evidence for a nonplanar amplituhedron
Bern, Zvi; Herrmann, Enrico; Litsey, Sean; ...
2016-06-17
The scattering amplitudes of planar N = 4 super-Yang-Mills exhibit a number of remarkable analytic structures, including dual conformal symmetry and logarithmic singularities of integrands. The amplituhedron is a geometric construction of the integrand that incorporates these structures. This geometric construction further implies the amplitude is fully specified by constraining it to vanish on spurious residues. By writing the amplitude in a dlog basis, we provide nontrivial evidence that these analytic properties and “zero conditions” carry over into the nonplanar sector. Finally, this suggests that the concept of the amplituhedron can be extended to the nonplanar sector of N = 4 super-Yang-Mills theory.
Variational formulation of hybrid problems for fully 3-D transonic flow with shocks in rotor
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Based on previous research, the unified variable-domain variational theory of hybrid problems for rotor flow is extended to fully 3-D transonic rotor flow with shocks, unifying and generalizing the direct and inverse problems. Three families of variational principles (VPs) were established. All unknown boundaries and flow discontinuities (such as shocks and free trailing vortex sheets) are successfully handled via functional variations with variable domain, converting almost all boundary and interface conditions, including the Rankine-Hugoniot shock relations, into natural ones. This theory provides a series of novel ways for blade design or modification and a rigorous theoretical basis for finite element applications, and also constitutes an important part of the optimal design theory of rotor bladings. Numerical solutions for subsonic flow by finite elements with self-adapting nodes, given in the references, show good agreement with experimental results.
NASA Astrophysics Data System (ADS)
González-Ausejo, Jennifer; Sánchez-Safont, Estefania; Cabedo, Luis; Gamez-Perez, Jose
2016-11-01
Poly(hydroxybutyrate-co-valerate) (PHBV) is a fully biodegradable biopolymer synthesized by microorganisms, with improved thermal and tensile properties with respect to some commodity plastics. However, it presents an intrinsic brittleness that limits its potential to replace plastics in packaging applications. Films made of blends of PHBV with different contents of thermoplastic polyurethane (TPU) were prepared in a single-screw extruder, and their fracture toughness was assessed by means of the essential work of fracture (EWF) method. As the crack propagation was not always stable, a partition method was used to compare all formulations and to relate the results to the morphology of the blends. Indeed, full characterization of the different PHBV/TPU blends showed that PHBV was incompatible with TPU. The blends showed an improvement in fracture toughness, with a maximum at intermediate TPU contents.
Enhanced definition and required examples of common datum imposed by ISO standard
NASA Astrophysics Data System (ADS)
Yan, Yiqing; Bohn, Martin
2017-12-01
According to the ISO Geometrical Product Specifications (GPS), the establishment and definition of common datums for geometrical components are not fully defined. This standard has two main limitations: first, the explanations in the ISO examples of common datums do not match their corresponding definitions; second, a full definition of the common datum is missing. This paper suggests a new approach with an enhanced definition and concrete examples of common datums, and proposes a holistic methodology for establishing a common datum for each geometrical component. The research is based on an analysis of the physical behaviour of geometrical components, orientation constraints and the invariance classes of datums. This approach fills the definition gaps of common datums in ISO GPS, thereby eliminating those deficits. As a result, an improved methodology for a fully functional definition of common datums was formulated.
NASA Technical Reports Server (NTRS)
Scudder, J. D.; Olbert, S.
1983-01-01
The breakdown of the classical (CBES) field-aligned transport relations for electrons in an inhomogeneous, fully ionized plasma is addressed as a mathematical issue of radius of convergence; the finite-Knudsen-number conditions under which CBES results are accurate are presented; and a global-local (GL) way to describe the results of Coulomb-physics-moderated conduction, more nearly appropriate for astrophysical plasmas, is defined. This paper shows the relationship to, and the points of departure of the present work from, the CBES approach. The CBES heat law in current use is shown to be an especially restrictive special case of the new, more general GL result. A preliminary evaluation of the dimensionless heat function, using analytic formulas, shows that dimensionless heat function profiles versus density of the type necessary for a conduction-supported high-speed solar wind appear possible.
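The Knudsen-number criterion the abstract invokes can be sketched numerically: classical transport is trustworthy only when the Coulomb mean free path is much shorter than the temperature gradient scale length. The mean-free-path prefactor and the plasma parameters below are rough, illustrative values, not results from the paper:

```python
# Kn = lambda_mfp / L_T, with L_T = T / |dT/ds| the temperature gradient
# scale length. The prefactor below is an order-of-magnitude estimate of
# the electron Coulomb mean free path, lambda ~ T^2 / (n * ln Lambda).

def coulomb_mfp_cm(T_eV, n_cm3, coulomb_log=20.0):
    """Approximate electron Coulomb mean free path in cm (illustrative)."""
    return 1.4e13 * T_eV**2 / (n_cm3 * coulomb_log)

def knudsen(T_eV, n_cm3, L_T_cm):
    return coulomb_mfp_cm(T_eV, n_cm3) / L_T_cm

# Solar-wind-like conditions (hypothetical values): T ~ 10 eV, n ~ 5 cm^-3,
# gradient scale length ~0.3 AU.
kn = knudsen(T_eV=10.0, n_cm3=5.0, L_T_cm=5.0e12)
print(kn)  # not << 1, so the classical (CBES) expansion is suspect
```

When `kn` approaches or exceeds unity, as it does here, the CBES series is outside its radius of convergence and a global description such as the GL one becomes necessary.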
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.
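The "as close as possible" criterion in fully probabilistic design is usually a Kullback-Leibler divergence between the closed-loop density and the ideal one. A minimal sketch for 1-D Gaussians, with made-up numbers standing in for two candidate controllers, illustrates the quantity being minimized:

```python
import math

def kl_gauss(mu_p, s_p, mu_q, s_q):
    """KL divergence D(p || q) between two 1-D Gaussians (sigmas s_p, s_q)."""
    return (math.log(s_q / s_p)
            + (s_p**2 + (mu_p - mu_q)**2) / (2.0 * s_q**2) - 0.5)

# Closed-loop state pdf vs. a designer-specified ideal pdf N(0, 1):
d0 = kl_gauss(mu_p=0.8, s_p=1.20, mu_q=0.0, s_q=1.0)  # before tuning
d1 = kl_gauss(mu_p=0.1, s_p=1.05, mu_q=0.0, s_q=1.0)  # after tuning
print(d0, d1)  # the control design drives this divergence toward zero
```

The DHP critic in the paper approximates the gradient of this kind of divergence-based cost-to-go rather than evaluating it in closed form, which the Gaussian case above merely makes concrete.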
5D Tempest simulations of kinetic edge turbulence
NASA Astrophysics Data System (ADS)
Xu, X. Q.; Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M. R.; Hittinger, J. A.; Kerbel, G. D.; Nevins, W. M.; Rognlien, T. D.; Umansky, M. V.; Qin, H.
2006-10-01
Results are presented from the development and application of TEMPEST, a nonlinear five-dimensional (3d2v) gyrokinetic continuum code. The simulation results and theoretical analysis include studies of H-mode edge-plasma neoclassical transport and turbulence in real divertor geometry and its relationship to plasma flow generation with zero external momentum input, including the important orbit-squeezing effect due to the large electric-field flow shear in the edge. In order to extend the code to 5D, we have formulated a set of fully nonlinear electrostatic gyrokinetic equations and a fully nonlinear gyrokinetic Poisson's equation valid for both neoclassical and turbulence simulations. Our 5D gyrokinetic code is built on the 4D version of the Tempest neoclassical code, extended to a fifth dimension in the binormal direction. The code is able to simulate either a full torus or a toroidal segment. Progress on performing 5D turbulence simulations will be reported.
NASA Astrophysics Data System (ADS)
Mozaffar, A.; Schoon, N.; Digrado, A.; Bachy, A.; Delaplace, P.; du Jardin, P.; Fauconnier, M.-L.; Aubinet, M.; Heinesch, B.; Amelynck, C.
2017-03-01
Because of its high abundance and long lifetime compared to other volatile organic compounds in the atmosphere, methanol (CH3OH) plays an important role in atmospheric chemistry. Even though agricultural crops are believed to be a large source of methanol, emission inventories from those crop ecosystems are still scarce and little information is available concerning the driving mechanisms for methanol production and emission at different developmental stages of the plants/leaves. This study focuses on methanol emissions from Zea mays L. (maize), which is widely cultivated throughout the world. Flux measurements have been performed on young plants, almost fully grown leaves and fully grown leaves, enclosed in dynamic flow-through enclosures in a temperature- and light-controlled environmental chamber. Strong differences in the response of methanol emissions to variations in PPFD (Photosynthetic Photon Flux Density) were noticed among the young plants, almost fully grown and fully grown leaves. Moreover, young maize plants showed strong emission peaks following light/dark transitions, for which guttation can be put forward as a hypothetical pathway. Young plants' average daily methanol fluxes exceeded by a factor of 17 those of almost fully grown and fully grown leaves when expressed per leaf area. Absolute flux values were found to be smaller than those reported in the literature, but in fair agreement with recent ecosystem-scale flux measurements above a maize field of the same variety as used in this study. The flux measurements in the current study were used to evaluate the dynamic biogenic volatile organic compound (BVOC) emission model of Niinemets and Reichstein. The modelled and measured fluxes from almost fully grown leaves were found to agree best when a temperature- and light-dependent methanol production function was applied.
However, this production function turned out not to be suitable for modelling the observed emissions from the young plants, indicating that production must be influenced by (an) other parameter(s). This study clearly shows that methanol emission from maize is complex, especially for young plants. Additional studies at different developmental stages of other crop species will be required in order to develop accurate methanol emission algorithms for agricultural crops.
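The "temperature and light dependent production function" class of model can be sketched with generic Guenther-type activity factors. This is NOT the Niinemets-Reichstein dynamic model itself (which additionally tracks production and storage pools); all parameter values here are illustrative placeholders:

```python
import math

ALPHA, CL1 = 0.0027, 1.066   # empirical light-response constants (illustrative)
BETA, T_S = 0.09, 303.15     # temperature sensitivity, standard temperature (K)

def gamma_light(ppfd):
    """Light activity factor, saturating with PPFD (umol m-2 s-1)."""
    return ALPHA * CL1 * ppfd / math.sqrt(1.0 + ALPHA**2 * ppfd**2)

def gamma_temp(T_K):
    """Exponential temperature activity factor."""
    return math.exp(BETA * (T_K - T_S))

def emission(E_std, ppfd, T_K):
    """Scale a standard-condition emission rate E_std by both factors."""
    return E_std * gamma_light(ppfd) * gamma_temp(T_K)

print(emission(1.0, ppfd=1000.0, T_K=298.15))  # daytime conditions
print(emission(1.0, ppfd=0.0, T_K=293.15))     # dark: light factor -> 0
```

A purely multiplicative response like this goes to zero in darkness, which is exactly why it cannot reproduce the dark-transition emission bursts observed in the young plants.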
MPDATA: Third-order accuracy for variable flows
NASA Astrophysics Data System (ADS)
Waruszewski, Maciej; Kühnlein, Christian; Pawlowska, Hanna; Smolarkiewicz, Piotr K.
2018-04-01
This paper extends the multidimensional positive definite advection transport algorithm (MPDATA) to third-order accuracy for temporally and spatially varying flows. This is accomplished by identifying the leading truncation error of the standard second-order MPDATA, performing the Cauchy-Kowalevski procedure to express it in a spatial form and compensating its discrete representation, much in the same way as the standard MPDATA corrects the first-order accurate upwind scheme. The procedure of deriving the spatial form of the truncation error was automated using a computer algebra system. This enables various options in MPDATA to be included straightforwardly in the third-order scheme, thereby minimising the implementation effort in existing code bases. Following the spirit of MPDATA, the error is compensated using the upwind scheme, resulting in a sign-preserving algorithm, and the entire scheme can be formulated using only two upwind passes. Established MPDATA enhancements, such as the formulation in generalised curvilinear coordinates, the nonoscillatory option and the infinite-gauge variant, carry over to the fully third-order accurate scheme. A manufactured 3D analytic solution is used to verify the theoretical development and its numerical implementation, whereas global tracer-transport benchmarks demonstrate benefits for chemistry-transport models fundamental to air quality monitoring, forecasting and control. A series of explicitly-inviscid implicit large-eddy simulations of a convective boundary layer and explicitly-viscid simulations of a double shear layer illustrate advantages of the fully third-order-accurate MPDATA for fluid dynamics applications.
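The building block of every MPDATA pass is the first-order donor-cell (upwind) step, whose sign-preserving and conservative character the corrected schemes inherit. A minimal 1-D sketch of that step alone (the antidiffusive pseudo-velocity corrections are not reproduced here):

```python
import numpy as np

def upwind_step(psi, courant):
    """One donor-cell step on a periodic domain, Courant number C = u*dt/dx."""
    if courant >= 0:
        return psi - courant * (psi - np.roll(psi, 1))
    return psi - courant * (np.roll(psi, -1) - psi)

psi = np.zeros(50)
psi[10:15] = 1.0                 # rectangular pulse
total0 = psi.sum()
for _ in range(100):
    psi = upwind_step(psi, 0.4)  # stable and monotone for 0 <= C <= 1
print(psi.sum())                 # conservative: total "mass" preserved
print(psi.min() >= 0.0)          # sign-preserving, like full MPDATA
```

MPDATA reuses exactly this operator: the second pass advects the first-pass result with an antidiffusive velocity built from the truncation error, and the paper's third-order variant compensates the remaining error within the same two-pass structure.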
Pediatric Biopharmaceutical Classification System: Using Age-Appropriate Initial Gastric Volume.
Shawahna, Ramzi
2016-05-01
Development of optimized pediatric formulations for oral administration can be a challenging, time-consuming, and financially intensive process. Since its inception, the biopharmaceutical classification system (BCS) has facilitated the development of oral drug formulations destined for adults. At least in theory, the BCS principles also apply to pediatrics, yet a comprehensive age-appropriate BCS has not been fully developed. The objective of this work was to provisionally classify oral drugs listed on the latest World Health Organization Essential Medicines List for Children into an age-appropriate BCS. A total of 38 orally administered drugs were included in this classification. Dose numbers were calculated using age-appropriate initial gastric volumes for neonates, 6-month-old infants, and children aged 1 year through adulthood. Using age-appropriate initial gastric volumes and British National Formulary age-specific dosing recommendations in the calculation of dose numbers, the solubility classes shifted from low to high in pediatric subpopulations of 12 years and older for amoxicillin; 5 years and 12 years and older for cephalexin; 9 years and older for chloramphenicol; 3-4 years, 9-11 years, and 15 years and older for diazepam; 18 years and older (adults) for doxycycline and erythromycin; 8 years and older for phenobarbital; 10 years and older for prednisolone; and 15 years and older for trimethoprim. Pediatric biopharmaceutics is not fully understood, and several knowledge gaps have recently been emphasized. The current biowaiver criteria are not suitable for safe application in all pediatric populations.
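The dose-number calculation driving these solubility classes is simple to sketch. The drug values and gastric volumes below are illustrative placeholders, not the paper's data:

```python
# Dose number Do = (M / V0) / Cs : highest single dose M (mg) dissolved in
# the initial gastric volume V0 (mL), relative to the drug's solubility
# Cs (mg/mL). Do <= 1 marks the "highly soluble" class for that group.

def dose_number(dose_mg, v0_ml, solubility_mg_per_ml):
    return (dose_mg / v0_ml) / solubility_mg_per_ml

cs = 4.0  # hypothetical solubility, mg/mL
print(dose_number(500.0, 250.0, cs))  # adult (V0 ~250 mL):  0.5 -> highly soluble
print(dose_number(100.0, 25.0, cs))   # infant (V0 ~25 mL):  1.0 -> borderline
print(dose_number(100.0, 10.0, cs))   # neonate (V0 ~10 mL): 2.5 -> low solubility
```

This is the mechanism behind the reported class shifts: shrinking V0 faster than the age-specific dose inflates the dose number, so a drug that is "highly soluble" for adults can fall into the low-solubility class for younger subpopulations.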
Penazzato, Martina; Lewis, Linda; Watkins, Melynda; Prabhu, Vineet; Pascual, Fernando; Auton, Martin; Kreft, Wesley; Morin, Sébastien; Vicari, Marissa; Lee, Janice; Jamieson, David; Siberry, George K
2018-02-01
Despite the coordinated efforts by several stakeholders to speed up access to HIV treatment for children, development of optimal paediatric formulations still lags 8 to 10 years behind that of adults, due mainly to lack of market incentives and technical complexities in manufacturing. The small and fragmented paediatric market also hinders launch and uptake of new formulations. Moreover, the problems affecting HIV similarly affect other disease areas where development and introduction of optimal paediatric formulations is even slower. Therefore, accelerating processes for developing and commercializing optimal paediatric drug formulations for HIV and other disease areas is urgently needed. The Global Accelerator for Paediatric Formulations (GAP-f) is an innovative collaborative model that will accelerate availability of optimized treatment options for infectious diseases, such as HIV, tuberculosis and viral hepatitis, affecting children in low- and middle-income countries (LMICs). It builds on the HIV experience and existing efforts in paediatric drug development, formalizing collaboration between normative bodies, research networks, regulatory agencies, industry, supply and procurement organizations and funding bodies. Upstream, the GAP-f will coordinate technical support to companies to design and study optimal paediatric formulations, harmonize efforts with regulators and incentivize manufacturers to conduct formulation development. Downstream, the GAP-f will reinforce coordinated procurement and communication with suppliers. The GAP-f will be implemented in a three-stage process: (1) development of a strategic framework and promotion of key regulatory efficiencies; (2) testing of feasibility and results, building on the work of existing platforms such as the Paediatric HIV Treatment Initiative (PHTI) including innovative approaches to incentivize generic development and (3) launch as a fully functioning structure. 
GAP-f is a key example of a partnership enhancing North-South and international cooperation on, and access to, science, technology and capacity building, responding to Sustainable Development Goals (SDG) 17.6 (technology) and 17.9 (capacity building). By promoting access to the most needed paediatric formulations for HIV and high-burden infectious diseases in low- and middle-income countries, GAP-f will support achievement of SDG 3.2 (infant mortality), 3.3 (ending AIDS and combating other communicable diseases) and 3.8 (access to essential medicines), and will be an essential component of meeting the global Start Free, Stay Free, AIDS Free super-fast-track targets. © 2018 World Health Organization; licensee IAS.
A Low-mass Exoplanet Candidate Detected by K2 Transiting the Praesepe M Dwarf JS 183
NASA Astrophysics Data System (ADS)
Pepper, Joshua; Gillen, Ed; Parviainen, Hannu; Hillenbrand, Lynne A.; Cody, Ann Marie; Aigrain, Suzanne; Stauffer, John; Vrba, Frederick J.; David, Trevor; Lillo-Box, Jorge; Stassun, Keivan G.; Conroy, Kyle E.; Pope, Benjamin J. S.; Barrado, David
2017-04-01
We report the discovery of a repeating photometric signal from a low-mass member of the Praesepe open cluster that we interpret as a Neptune-sized transiting planet. The star is JS 183 (HSHJ 163, EPIC 211916756), with Teff = 3325 ± 100 K, M* = 0.44 ± 0.04 M⊙, R* = 0.44 ± 0.03 R⊙, and log g* = 4.82 ± 0.06. The planet has an orbital period of 10.134588 days and a radius of RP = 0.32 ± 0.02 RJ. Since the star is faint at V = 16.5 and J = 13.3, we are unable to obtain a measured radial-velocity orbit, but we can constrain the companion mass to below about 1.7 MJ, and thus well below the planetary boundary. JS 183b (since designated K2-95b) is the second transiting planet found with K2 that resides in a several-hundred-megayear open cluster; both planets orbit mid-M dwarf stars and are approximately Neptune sized. With a well-determined stellar density from the planetary transit, and with an independently known metallicity from its cluster membership, JS 183 provides a particularly valuable test of stellar models at the fully convective boundary. We find that JS 183 is the lowest-density transit host known at the fully convective boundary, and that its very low density is consistent with current models of stars just above the fully convective boundary but in tension with models just below it.
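The quoted planetary and stellar radii can be sanity-checked through the standard transit-depth relation depth ≈ (Rp/R*)². A minimal sketch using the abstract's values (the unit-conversion constant is the standard approximate value, not taken from the paper):

```python
# Transit depth from the radius ratio; R_Jup -> R_Sun conversion ~0.10045.
R_JUP_IN_R_SUN = 0.10045

def transit_depth(rp_rjup, rstar_rsun):
    """Fractional flux dip, depth = (Rp / R*)^2, ignoring limb darkening."""
    ratio = rp_rjup * R_JUP_IN_R_SUN / rstar_rsun
    return ratio**2

depth = transit_depth(0.32, 0.44)   # JS 183b values from the abstract
print(f"{depth * 100:.2f}% transit depth")
```

A roughly half-percent dip repeating every 10.13 days is well within K2's photometric reach even for a V = 16.5 star, consistent with the detection described.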